DETAILED DESCRIPTION

Reference now will be made in detail to embodiments, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the embodiments, not limitation of the present disclosure. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments without departing from the scope of the present disclosure. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that aspects of the present disclosure cover such modifications and variations.

Example aspects of the present disclosure are directed to dual channel differential sensors, such as sensors that produce dual channel differential signals, such as sinusoidal outputs. As examples, the sensors can be or can include inductive sensors, such as inductive position sensors (e.g., rotational position sensors), inductive motor sensors, inductive gearbox sensors, magnetic encoders, and/or control systems such as electronic brake boosters, electronic brake systems, steering systems, torque control systems, and/or any other suitable sensor configured to produce differential signals. For instance, the sensors can produce differential signals in response to suitable stimuli, characteristics, phenomena, and/or other targets that the sensors are configured to measure. As one example, the sensors may be configured to measure position and/or movement of a target (e.g., rotational motion of the target) by measuring interaction of the target with the sensor through electromagnetic induction.

As used herein, a "differential signal" includes at least a correlated pair of component signals. The component signals can be combined (e.g., additively and/or subtractively combined with respect to polarity) to produce the differential signal. The component signals can be electrical signals, such as analog signals and/or digital signals. For example, the component signals can be or can include electrical signals such as voltage signals, current signals, etc. As one example, the component signals can be measured and/or sampled from a coil, such as a sinusoidal coil.

The dual channel differential sensors can include two (or more) independent channels that convey redundant, related, and/or identical information. Although, conventionally, sensors may at least partially operate with information from only one channel (e.g., a single channel may capture all information needed for an intended measurement by the sensor), including two or more channels conveying correlated, redundant, and/or otherwise corroborating information can provide for a number of improvements in sensor functionality, such as, for example, improved reliability and/or robustness. For instance, information from a first channel can be cross-checked against information from a second channel to verify desired operation of the sensor. As another example, including two or more channels can provide improved safety of the sensors and/or systems operating based on sensor measurements. For example, disagreement between the channels can be indicative of a fault condition and/or otherwise undesirable operation (e.g., miscalibration) in a larger system (e.g., a motor).
For instance, disagreement between the channels (e.g., greater than a certain tolerance) may be used in triggering a warning, fault resolution action, braking action, shutdown, etc., and may be provided to a technician or other individual for troubleshooting and/or repair, and/or provided in other suitable manners to ensure safe and reliable operation of a system.

Additionally and/or alternatively, differential signals can provide improved noise tolerance of a sensor. For example, sensors and/or systems employing the sensors can include components, such as lengths of wiring, that are sensitive to electromagnetic interference and/or other forms of noise present in an environment during operation of the sensor. As one example, some sensors can be vulnerable to common mode noise. Differential signals can be beneficial in mitigating effects of common mode noise.

However, including differential signals and/or two or more channels can contribute to increased cabling required to convey outputs from the sensor. For example, each differential signal can require two or more signal lines to convey the output. If each channel produces two differential signals (e.g., sine and cosine), then each channel can require four signal lines. Thus, a dual channel differential sensor can require eight or more signal lines coupled to the sensor and/or otherwise included within the sensor to communicate all information from the signals. This increased cabling can contribute to increased manufacturing, operating, and/or maintenance costs, decreased reliability (e.g., greater chance of line breakage, loose connections, etc.), increased sensitivity to electromagnetic interference, noise, cross-talk, etc., and/or other disadvantages. Thus, it can be desirable to decrease a number of signal lines required to communicate information from a dual channel differential sensor while maintaining most or all benefits associated with the dual channel differential sensor, such as increased safety, reliability, and/or noise tolerance, especially in safety-critical applications (e.g., vehicle control).

One solution to this problem is to use only single-ended outputs including one of each pair of component signals having common polarity, such as only sine+ and/or cosine+ signals, from each channel. This approach can reduce a total number of signal lines, as it requires only four signal lines including, for example, a sine+ and cosine+ from each channel. However, this approach effectively removes the differential characteristic of the signals. Thus, this approach can be vulnerable to electrical noise, such as common mode noise.

According to example aspects of the present disclosure, a dual channel differential sensor can be configured to experience reduced cabling while maintaining advantages of the dual channel differential sensor, including safety, reliability, and noise tolerance. The dual channel differential sensor can be any one or more of an inductive sensor, an inductive motor sensor, an inductive gearbox sensor, an inductive position sensor, a magnetic encoder, an electronic brake booster, an electronic brake system, a steering system, a torque control system, and/or any other suitable dual channel differential sensor in accordance with example aspects of the present disclosure. The dual channel differential sensor can include a first channel and a second channel. The second channel can be independent from the first channel. The first channel can be configured to produce a first component signal.
Additionally and/or alternatively, the second channel can be configured to produce a second component signal. The first component signal can have a first polarity and the second component signal can have a second polarity. The second polarity can oppose the first polarity. As used herein, a polarity can refer to a designed interpretation of the signals (e.g., a cross-channel differential signal labeled as positive or negative by convention). Additionally and/or alternatively, opposing polarity can refer to a phase difference of about 180 degrees and/or greater than about 90 degrees when accounting for designed phase differences between the channels and/or between the signals (e.g., channel phase differences and/or output phase differences). For example, a sine signal having a first polarity may be about 90 degrees out of phase with a cosine signal having a same polarity by virtue of convention. As another example, a sine signal from a first channel may be about 135 degrees out of phase with a cosine signal having a same polarity from a second channel having a 45 degree channel phase offset with the first channel. For instance, the phase offset may be greater than 90 degrees with a common polarity due to the channel phase offset. As used herein, polarity is intended to refer to differential relationships between the signals, and is not necessarily related to a polarity of values of the signals (e.g., a component signal with negative polarity may still have a positive value at some or all points).

According to example aspects of the present disclosure, the differential sensor can determine a first cross-channel differential signal based at least in part on the first component signal and the second component signal and provide the first cross-channel differential signal as an output of the dual channel differential sensor. For example, the cross-channel differential signal can be formed from component signals from independent channels, which can provide for improved robustness, safety, noise tolerance (e.g., common mode noise tolerance), and/or other advantages associated with a plurality of channels while providing reduced cabling (e.g., requiring only two signals instead of four).

As used herein, a "channel" refers to any suitable system for conveying enough information to perform desired measurements using the differential sensor, such as signal lines, circuitry, coils, etc. For instance, in some embodiments, each channel can include one or more coils (e.g., receive coil(s) and/or transmit coil(s)), channel circuitry configured to energize and/or measure a signal at the coil(s) (e.g., a coil signature) and/or produce component signals based on the measured signals (e.g., coil signatures), and/or one or more signal lines to transmit differential signals (e.g., component signals). In some embodiments, each channel can produce one or more component signals associated with one or more differential signals. For example, in some embodiments, a channel can produce at least one component signal for each of at least two distinct differential signals, such as, for example, a sine differential signal and a cosine differential signal. For instance, a first channel may produce a component signal associated with a first differential signal (e.g., a sine differential signal) and a second differential signal (e.g., a cosine differential signal). In some embodiments, only one of a pair of component signals associated with each differential signal may be produced by a single channel.
A corresponding component signal from a second channel can be used with the component signal from the first channel to produce a cross-channel differential signal. For instance, in some embodiments, a first component signal from a first channel and a second component signal from a second channel can be sine signals and/or the first cross-channel differential signal can be a sine differential signal. For example, a sine+ signal from a first channel can be combined with a sine− signal from a second channel to produce a cross-channel sine differential signal.

For example, in some embodiments, a first channel can be configured to produce a third component signal. The third component signal can have a first polarity (e.g., a same polarity as a first component signal from the first channel). Additionally and/or alternatively, a second channel can be configured to produce a fourth component signal. The fourth component signal can have a second polarity (e.g., a same polarity as a second component signal from the second channel). The third component signal and the fourth component signal can be used to determine a second cross-channel differential signal based at least in part on the third component signal and the fourth component signal. The second cross-channel differential signal can be provided (e.g., in addition to the first cross-channel differential signal) as an output of the dual channel differential sensor. For instance, in some embodiments, the third component signal and the fourth component signal can be cosine signals and/or the second cross-channel differential signal can be a cosine differential signal. For example, a cosine+ signal from a first channel can be combined with a cosine− signal from a second channel to produce a cross-channel cosine differential signal. The cross-channel cosine differential signal may be about 90 degrees out of phase with the cross-channel sine differential signal.

Thus, in some embodiments, the component signals from the first channel can be positive component signals (e.g., sine+ and/or cosine+ signals) and the component signals from the second channel can be negative component signals (e.g., sine− and/or cosine− signals). Additionally and/or alternatively, one positive component signal and/or one negative component signal from each of the first channel and the second channel can be used.

In some embodiments, the sensor can further be configured to determine an output angle based at least in part on the first cross-channel differential signal and the second cross-channel differential signal. For instance, in some embodiments, the output angle can be a two-argument arctangent of the first cross-channel differential signal and the second cross-channel differential signal. For example, in some embodiments, the output angle can be determined by atan2(SINout, COSout), where SINout is the first cross-channel differential signal (e.g., a sine differential signal) and COSout is the second cross-channel differential signal (e.g., a cosine differential signal).

In some embodiments, one or both of the first channel and/or the second channel can include one or more coils configured to interact with a target and produce one or more coil signatures in response to interaction with the target. In some embodiments, the coil(s) can be or can include a receive coil and/or a transmit coil. For example, the coil signatures can be receive signals measured from and/or sampled from a receive coil, which may be produced in response to a transmit signal at a transmit coil.
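As a concrete illustration of the cross-channel combination and two-argument arctangent described above, consider the following minimal sketch (shown in Python for illustration only; the sample values and variable names are hypothetical, and a zero channel phase difference is assumed):

```python
import math

theta = math.radians(30.0)   # true target angle (hypothetical)
cm = 0.05                    # common mode disturbance coupled onto every line

# One component signal per channel and polarity (the only four retained).
sine1p = math.sin(theta) + cm        # sine+ from the first channel
cosine1p = math.cos(theta) + cm      # cosine+ from the first channel
sine2n = -math.sin(theta) + cm       # sine- from the second channel
cosine2n = -math.cos(theta) + cm     # cosine- from the second channel

# Cross-channel differential signals: combine opposite-polarity components
# from independent channels; the common mode term cancels in each difference.
sin_out = sine1p - sine2n            # SINout
cos_out = cosine1p - cosine2n        # COSout

# Output angle from the two-argument arctangent of the differential outputs.
print(math.degrees(math.atan2(sin_out, cos_out)))   # 30.0, unaffected by cm
```

Note that each subtraction doubles the signal amplitude while cancelling the disturbance common to both lines, which corresponds to the common mode noise tolerance discussed above.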
In some embodiments, the coil(s) can be or can include sinusoidally shaped coils (e.g., sinusoidal receive coils). For example, a shape of the coil can be designed to produce a sinusoidal component signal in response to rotational motion, linear motion, and/or other desired motion of a target. In some embodiments, the coil(s) are rotationally offset around a central axis. For example, the coil(s) can be rotationally offset by a channel phase difference (e.g., between channels) and/or an output phase difference (e.g., between each differential signal in a channel). In some embodiments, the channel phase difference can be negligible (e.g., about zero degrees, such as less than about 5 degrees).

Additionally and/or alternatively, the channel(s) can include channel circuitry configured to produce component signals of one or more differential signals in response to the one or more coil signatures. The channel circuitry can be independently provided for each channel. For example, channel circuitry associated with the first channel can be disposed in a first integrated circuit (IC), such as an application-specific integrated circuit (ASIC), and channel circuitry associated with the second channel can be disposed in a second integrated circuit. The second integrated circuit can be different from (e.g., a separate IC from) the first integrated circuit. For example, in some embodiments, the channel circuitry can be configured to process the coil signatures and produce a sinusoidal component signal where a phase of the sinusoidal component signal corresponds to a rotational orientation and/or position of a target. For example, in some embodiments, the first component signal and the second component signal can be sinusoidal signals (e.g., sine signals) and the third component signal and the fourth component signal can be sinusoidal signals (e.g., cosine signals) that are phase shifted relative to the first component signal and the second component signal.

Additionally and/or alternatively, the channel(s) can include an interface configured to provide one of the first differential signal and the second differential signal (e.g., of the first channel) or the third differential signal and the fourth differential signal (e.g., of the second channel), such as one or both component signals thereof. For example, the channel(s) can each include an interface including one or more signal lines configured to provide pairs of signals associated with the differential signals. In some embodiments, the sensor may include an interface with signal lines and/or couplings only for utilized signals (e.g., one of each pair, as described herein). In some embodiments, the sensor may include an interface with signal lines and/or couplings for each signal, and connections may be made only with desired signals, to reduce cabling as described herein.

In some implementations, each of the channels can be configured to produce one or both component signals of two differential signals. For instance, the first channel can produce one or both component signals associated with a first differential signal and a second differential signal. Additionally and/or alternatively, the second channel can produce one or both component signals associated with a third differential signal and a fourth differential signal. The third differential signal can correspond with the first differential signal. For example, the third differential signal may convey identical, redundant, or otherwise corroborating information to the first differential signal.
Additionally and/or alternatively, the fourth differential signal can correspond with the second differential signal. For example, the fourth differential signal may convey identical, redundant, or otherwise corroborating information to the second differential signal. As one example, the first differential signal and the third differential signal can each be a sine output. Additionally and/or alternatively, the second differential signal and the fourth differential signal can each be a cosine output.

For instance, each of the differential signals can include a pair of component signals. For example, the first differential signal can include a first pair of component signals. Additionally and/or alternatively, the second differential signal can include a second pair of component signals. Additionally and/or alternatively, the third differential signal can include a third pair of component signals. The third pair of component signals can correspond to the first pair of component signals. For instance, in some embodiments, the third pair of component signals can be nearly equivalent to and/or equivalent to the first pair of component signals and/or a phase-shifted first pair of component signals. Additionally and/or alternatively, the fourth differential signal can include a fourth pair of component signals. The fourth pair of component signals can correspond to the second pair of component signals. For instance, in some embodiments, the fourth pair of component signals can be nearly equivalent to and/or equivalent to the second pair of component signals and/or a phase-shifted second pair of component signals. The sensor may be configured to produce and/or make available for measurement one or both of each pair of component signals. For example, in some implementations, one of each pair of component signals can be omitted from production and the sensor can be configured to experience reduced cabling as described herein.

Each component signal of a pair of component signals can have an associated polarity. For example, a first signal of the pair of component signals can have a first polarity (e.g., positive), and a second signal of the pair of component signals can have a second polarity opposite to the first polarity (e.g., negative). The component signals can be combined (e.g., additively combined) based at least in part on their respective polarity to produce the differential signal. For example, a second signal having a negative polarity can be subtracted from a first signal having a positive polarity to produce the differential signal. The combination can be performed in an analog domain, such as by analog combination directly on analog component signals, and/or in a digital domain, such as by digital combination on digital component signals, digital samples of analog component signals, and/or in any other suitable manner. According to example aspects of the present disclosure, a cross-channel differential signal can be produced by taking opposite component signals from a corresponding differential signal of each channel and combining the opposite component signals with regard to polarity and/or phase offsets. In some embodiments, the component signals and/or differential signals can be sinusoidal signals, such as sine signals and/or cosine signals.
For instance, in some embodiments, the first differential signal and the third differential signal each can be a differential sine signal, and the second differential signal and the fourth differential signal each can be a differential cosine signal. For example, in some embodiments, the pairs of component signals can include a sine+ signal, sine− signal, cosine+ signal, cosine− signal, etc. As another example, in some embodiments, the differential signals can be or can include a sine output and/or a cosine output. For example, one or both channels may be configured to produce a sine output and a cosine output. For example, in some embodiments, the pair of component signals may be measured from one or more coils (e.g., receive coils) rotationally disposed about a central axis. A sine output may be measured from a first coil and/or a cosine output may be measured from a second coil rotationally disposed out of phase by an output phase difference, such as a second coil that is 90 degrees out of phase with the first coil. For example, the second coil may be structurally similar and/or identical to the first coil and rotated about the central axis by 90 degrees to produce the cosine output.

The dual channel differential sensor can include a sensing circuit. For example, the sensing circuit can be included as part of and/or separate from the channel circuitry. For example, the sensing circuit can be included in a package (e.g., an integrated circuit, computing device, etc.) that interfaces with the dual channel differential sensor by an interface, such as an interface including one or more signal lines. The signal lines can be pins (e.g., on an integrated circuit), traces, wires, cables, and/or other suitable systems configurable for signal transmission. According to example aspects of the present disclosure, a number of signal lines necessary to interface with the dual channel differential sensor can be reduced while maintaining advantages associated with the dual channel differential sensor.

The sensing circuit can be configured to obtain (e.g., receive and/or sample) the component signals and produce a cross-channel differential signal of the dual channel differential sensor. For instance, the sensing circuit can obtain a first component signal from a first channel and a second component signal from a second channel. The second channel may be independent from the first channel. The first component signal can have a first polarity and/or the second component signal can have a second polarity. The second polarity can oppose the first polarity. Additionally and/or alternatively, the sensing circuit can obtain a third component signal having the first polarity from the first channel and a fourth component signal having the second polarity from the second channel. For example, the sensing circuit can obtain the signals via an interface including one or more signal lines that are coupled to the dual channel differential sensor (e.g., a sensing circuit, coils, etc.). In some embodiments, each of the obtained component signals can have an associated signal line, such as for a total of four signal lines.
In some embodiments, the component signals described above may each be one of a pair of component signals for differential signals, and signal lines associated with other component signals of each differential signal can be omitted from the sensor (e.g., the interface and/or couplings to the interface) to provide a reduced cabling (e.g., reduction in a number of signal lines) required for the sensing circuit to interface with the dual channel differential sensor (e.g., from eight signal lines to four signal lines). Omitting the other signal lines can additionally and/or alternatively contribute to reduced cost (e.g., reduced operating and/or manufacturing cost), reduced bus width, reduced computation requirements (e.g., require fewer signals to process, fewer signals to sample/measure at the coils, etc.), and/or various other advantages. For instance, in some embodiments, the interface may provide couplings to the omitted signals that may not be connected. In some embodiments, the interface may omit couplings to the omitted signals entirely.

In some embodiments, each of the signals can have an associated phase. For instance, a phase of the first component signal can differ from a phase of the third component signal by a channel phase difference. Additionally and/or alternatively, a phase of the second component signal can differ from a phase of the fourth component signal by the channel phase difference.

In some embodiments, the first component signal and the second component signal each can be sine component signals, such as component signals associated with a differential sine signal. Additionally and/or alternatively, the first cross-channel differential signal can be a sine differential signal (e.g., a sine output). For example, the first cross-channel differential signal can be associated with a sine signal (e.g., wherein a zero value corresponds to a phase of 0 and/or 180 degrees) and can be computed based on component signals of differential sine signals from each channel. Additionally and/or alternatively, the third component signal and the fourth component signal each can be a cosine component signal, such as component signals associated with a differential cosine signal. Additionally and/or alternatively, the second cross-channel differential signal can be a cosine output. For example, the second cross-channel differential signal can be associated with a cosine signal (e.g., wherein a zero value corresponds to a phase of 90 and/or 270 degrees) and can be computed based on component signals of differential cosine signals from each channel.

Additionally and/or alternatively, the sensing circuit can be configured to determine a first cross-channel differential signal based at least in part on the first component signal and the second component signal. For instance, the first component signal and the second component signal can be combined based on respective polarities. For example, the first component signal may have a first polarity (e.g., positive) and the second component signal may have a second polarity (e.g., negative), and the second component signal may be added to and/or subtracted from the first component signal (e.g., subtracted based on a negative polarity). In some embodiments, the signals may be adjusted to resolve phase differences (e.g., channel phase differences) prior to determining the first cross-channel differential signal.
Additionally and/or alternatively, the sensing circuit can be configured to determine a second cross-channel differential signal based at least in part on the third component signal and the fourth component signal. For instance, the third component signal and the fourth component signal can be combined based on respective polarities. For example, the third component signal may have a first polarity (e.g., positive) and the fourth component signal may have a second polarity (e.g., negative), and the fourth component signal may be added to and/or subtracted from the third component signal (e.g., subtracted based on a negative polarity). In some embodiments, the signals may be adjusted to resolve phase differences (e.g., channel phase differences) prior to determining the second cross-channel differential signal. The second cross-channel differential signal can be a cosine output of the dual channel differential sensor. The second cross-channel differential signal may correspond to a desired output of the sensor. For example, the sensor may be configured to produce an overall cosine output. For example, in some embodiments, the overall cosine output can be obtained by subtracting a cosine− output of a second channel from a cosine+ output of a first channel. As one example, the second cross-channel differential signal can be computed as COSout = cosine1+ − cosine2−, wherein cosine1+ is a positive cosine component signal from a first channel and cosine2− is a negative cosine component signal from a second channel.

Additionally and/or alternatively, the sensing circuit can be configured to provide the first cross-channel differential signal as a first output of the dual channel differential sensor and/or to provide the second cross-channel differential signal as a second output of the dual channel differential sensor. For example, the sensor may include an external interface configured to be energized by the first cross-channel differential signal and/or the second cross-channel differential signal such that the signals can be provided to an external device capable of reading the signals.

Additionally and/or alternatively, the sensing circuit can implement a safety check to verify desired operation of the sensor and/or system(s) coupled to the sensor (e.g., a motor, gearbox, control system, encoder, etc.). For instance, the sensing circuit can implement the safety check to verify that the sensor is operating correctly and/or that operational conditions of the system(s) are safe and/or accurate. Operation of the sensor and/or system(s) can be adjusted based on the safety check. For example, operation of the system can be halted, warnings can be issued, etc. based on the safety check.

For instance, to implement the safety check, the sensing circuit can determine a first channel angle based at least in part on the first component signal and the third component signal. For example, the component signals can both be from a same channel (e.g., the first channel) and/or have a same polarity (e.g., positive). For instance, in some embodiments, the signals can include a sine+ signal and a cosine+ signal from a first channel. In some embodiments, the first channel angle can be determined using a two-argument arctangent (e.g., atan2) function. For example, the first channel angle can be determined by atan2(sine+, cosine+). Additionally and/or alternatively, the sensing circuit can determine a second channel angle based at least in part on the second component signal and the fourth component signal.
For example, the signals can both be from a same channel (e.g., the second channel) and/or have a same polarity (e.g., negative). For instance, in some embodiments, the signals can include a sine− signal and a cosine− signal from a second channel. In some embodiments, the second channel angle can be determined using a two-argument arctangent (e.g., atan2) function. For example, the second channel angle can be determined by atan2(sine−, cosine−).

Additionally and/or alternatively, the sensing circuit can determine a cross-channel angle difference based at least in part on the first channel angle and the second channel angle. For example, in some embodiments, the sensing circuit can subtract the second channel angle from the first channel angle to determine the cross-channel angle difference. Additionally and/or alternatively, in some embodiments, the sensing circuit can further determine that the cross-channel angle difference is within a cross-channel correlation tolerance margin. For example, the cross-channel correlation tolerance margin can be or can include a threshold (e.g., a magnitude threshold), a minimum and/or maximum, etc. For instance, in some embodiments, determining that the cross-channel angle difference is within the cross-channel correlation tolerance margin can include determining that a magnitude of the cross-channel angle difference is less than a correlation tolerance threshold, such as a correlation tolerance threshold δ.

In some embodiments, in response to determining that the cross-channel angle difference is within the cross-channel correlation tolerance margin, the sensor can be considered to be under normal operating conditions. For example, measurements can be obtained from the sensor and/or no correction control action related to correcting operation of the sensor may be performed. In some embodiments, in response to determining that the cross-channel angle difference is not within the cross-channel correlation tolerance margin, the sensing circuit can initiate and/or otherwise perform one or more correction control actions to correct operation of the sensor and/or otherwise adjust operation of one or more system(s) coupled to the sensor and/or for which the sensor is configured to monitor conditions. For example, in some cases, in response to determining that the cross-channel angle difference is not within the cross-channel correlation tolerance margin, the correction control action can be or can include a flag, warning, fault resolution action, braking action, shutdown, and/or other suitable correction control actions to ensure safe and reliable operation of a system.

As an example, in one implementation, a dual channel differential sensor can produce a sine differential signal and a cosine differential signal at a first channel including sine1+, sine1−, cosine1+, and cosine1− signals. Additionally and/or alternatively, a second channel can produce sine2+, sine2−, cosine2+, and cosine2− signals. Signal lines can be included that are associated with the sine1+, cosine1+, sine2−, and cosine2− signals. Additionally and/or alternatively, signal lines associated with sine1−, cosine1−, sine2+, and cosine2+ can be omitted from the sensor to provide reduced cabling. Additionally and/or alternatively, measuring points or other circuitry used to measure the signals having omitted signal lines may additionally be omitted, such that the signals with omitted signal lines may not have any dedicated components at the sensor and may exist only as convention.
An overall sine output of the sensor can be computed as SINout = sine1+ − sine2−. Additionally and/or alternatively, an overall cosine output of the sensor can be computed as COSout = cosine1+ − cosine2−. If necessary, angular compensations can be included in the computations. Additionally and/or alternatively, an angle between the signals can be computed as atan2(SINout, COSout). Notably, the angle can be resistant to effects of common mode noise.

These signals can additionally be used to perform a safety check. For instance, a first channel angle between the signals can be computed as angle1+ = atan2(sine1+, cosine1+). A second channel angle between the signals can be computed as angle2− = atan2(sine2−, cosine2−). The second channel angle can be subtracted from the first channel angle to produce a cross-channel angle difference, anglediff = angle1+ − angle2−. The cross-channel angle difference can be checked as being within a cross-channel correlation angle margin (e.g., having a magnitude less than a safety threshold, such as δ). If the cross-channel angle difference is outside of the cross-channel correlation angle margin (e.g., having a magnitude greater than and/or equal to the safety threshold), the sensor may be operating in an unintended capacity, and various safety measures can be performed based on the results of the safety check (e.g., implementing a correction control action).

In practice, such as due to sensor design, manufacturing variations, etc., slight discrepancies may be observed between the channels, such as between the first component signal and the third component signal and/or the second component signal and the fourth component signal, without departing from example aspects of the present disclosure. For example, such as due to limited space on a circuit board or other substrate, coils generating the first component signal and the third component signal may be offset, such as rotationally offset, such that a known channel phase difference is introduced between the first component signal and the third component signal. Similarly, coils generating the second component signal and the fourth component signal may be offset, such as rotationally offset, such that a known channel phase difference (e.g., the same channel phase difference as between the first and third component signals) is introduced between the second component signal and the fourth component signal. As one example, the dual channel differential sensor may be implemented using two channels of sinusoidal coils that are coaxially positioned. Because of interference between the coils, spatial limitations, etc., the coils associated with each channel may be offset by a known phase difference, such as about 45 degrees. In some embodiments, the sensor can be designed (e.g., using a multi-layer PCB) such that no channel phase difference exists, and/or may be designed to at least partially compensate for a channel phase difference (e.g., having a channel phase difference of less than about 45 degrees, such as less than about 15 degrees).

Additionally and/or alternatively, variations such as manufacturing variations, supply differences, noise, etc. may introduce minor amplitude differences between the channels, such that the first channel may have a first amplitude (e.g., common to the first and second differential signals) and the second channel may have a second amplitude (e.g., common to the third and fourth differential signals).
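Before turning to how these channel discrepancies can be compensated, the safety check from the worked example above can be sketched as follows. This is a minimal sketch: the removal of a nominal 180 degree offset between the opposite-polarity channel angles and the 5 degree tolerance are illustrative assumptions, the former because the two-argument arctangent of a negative-polarity pair nominally lands a half cycle away from that of the corresponding positive-polarity pair.

```python
import math

def cross_channel_safety_check(sine1p, cosine1p, sine2n, cosine2n,
                               delta=math.radians(5.0)):
    """Return True under normal operating conditions, False if the channels
    disagree by more than the cross-channel correlation tolerance margin."""
    angle1 = math.atan2(sine1p, cosine1p)   # first channel angle (angle1+)
    angle2 = math.atan2(sine2n, cosine2n)   # second channel angle (angle2-)
    # Cross-channel angle difference, with the nominal half-cycle offset
    # between opposite-polarity channel angles removed and the result
    # wrapped into (-pi, pi].
    diff = angle1 - angle2 - math.pi
    diff = math.atan2(math.sin(diff), math.cos(diff))
    return abs(diff) < delta

# A failing check (False) can trigger a correction control action such as
# a flag, warning, braking action, or shutdown, as described above.
```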
Such amplitude variations can arise, for example, because performance (e.g., gains) of circuitry (e.g., an application-specific integrated circuit (ASIC)) associated with the first channel may differ slightly from performance of circuitry (e.g., an ASIC) associated with the second channel due to manufacturing variations, supply differences, noise, etc., which may contribute to amplitude variations. As another example, tuning algorithms, such as automatic gain stabilization algorithms, may converge to different solutions from circuitry to circuitry. Thus, the first amplitude and second amplitude may desirably be identical, in some embodiments, but may nonetheless experience a slight (e.g., less than about 10%) variation in practice.

Thus, in some embodiments, a channel phase compensation can be applied to measurements from the sensor. For instance, the channel phase compensation can be applied to the output angle to correct for phase discrepancies between the first channel and the second channel. For instance, in some embodiments, determining an output angle between the first cross-channel differential signal and the second cross-channel differential signal can include applying a channel phase correction to the output angle. The channel phase correction can be based at least in part on the channel phase difference. For instance, in some embodiments, the channel phase correction can be applied to correct the channel phase difference.

Additionally and/or alternatively, in some embodiments, the channel phase correction can be based at least in part on an amplitude of the first channel and an amplitude of the second channel. For instance, in some embodiments, minor variations between the first and second channel can contribute to minor differences in amplitudes of each channel. For instance, in some embodiments, determining an output angle between the first cross-channel differential signal and the second cross-channel differential signal can include determining an amplitude of the first channel and determining an amplitude of the second channel. For example, amplitudes of the channels can be determined by measuring amplitude (e.g., maximum amplitude) of the channels over one or more cycles (e.g., complete periods) of the component signals. For example, the amplitude of a channel may correspond to an amplitude of a component signal (e.g., one of the pairs of component signals) and/or a differential signal (e.g., after combining the pair of component signals) and/or any other suitable signals (e.g., intermediate signals) associated with the channel.

Additionally and/or alternatively, in some embodiments, determining the output angle can include determining the channel phase correction based at least in part on the amplitude of the first channel, the amplitude of the second channel, and the phase difference. For example, in some embodiments, determining the channel phase correction can be based on the formula Δθ = (φ/2)·((A−B)/(A+B)), where Δθ is the channel phase correction, φ is the channel (e.g., physical) phase difference, A is the amplitude of the first channel, and B is the amplitude of the second channel. Additionally and/or alternatively, in some embodiments, determining the output angle can include applying the channel phase correction to the output angle. For example, the channel phase correction can be combined with (e.g., additively combined with) the output angle, such as added to and/or subtracted from the output angle.
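As an illustration of this correction formula, the following is a minimal sketch; the amplitude values and the 45 degree channel phase offset are hypothetical example inputs:

```python
import math

def channel_phase_correction(phi, amp_a, amp_b):
    """Compute the channel phase correction from the formula above:
    delta_theta = (phi / 2) * ((A - B) / (A + B)), where phi is the channel
    phase difference and A, B are the measured channel amplitudes."""
    return (phi / 2.0) * ((amp_a - amp_b) / (amp_a + amp_b))

# Hypothetical example: a 45 degree channel phase offset with a ~5%
# amplitude mismatch between the channels.
correction = channel_phase_correction(math.radians(45.0), 1.00, 0.95)
print(math.degrees(correction))   # ~0.58 degrees, to be combined with
                                  # (e.g., subtracted from) the output angle
```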
Example aspects of the present disclosure can provide for a number of technical effects and benefits. For instance, according to example aspects of the present disclosure, a dual channel differential sensor can be configured to experience reduced cabling while maintaining advantages of the dual channel differential sensor, including safety, reliability, and noise tolerance. For example, obtaining signals from the sensor as described herein (e.g., a sine+ signal and cosine+ signal from a first channel and a sine− signal and cosine− signal from a second channel) can require only half a number of signal lines compared to obtaining four complete differential signals. Additionally and/or alternatively, obtaining sensor measurements as described herein can provide low noise measurements having robustness to noise, such as common mode noise, while achieving the reduction in signal lines. For instance, the signals as described herein can maintain differential characteristics (e.g., between signals of opposite polarities) that provide resistance to effects of common mode noise. Additionally and/or alternatively, the use of signals from two channels can provide improved safety and reliability while achieving the reduction in signal lines. For example, the use of signals from two channels can provide for safety checks between the signals from two channels, which can be robust to operational variations.

Referring now to the FIGS., example aspects of the present disclosure will be discussed in greater detail with respect to example implementations of the present disclosure.

FIG. 1A depicts a block diagram of at least a portion of an example dual channel differential sensor 100 according to example embodiments of the present disclosure. The differential sensor 100 can include first channel 110 and/or second channel 130. First channel 110 can include first channel circuitry 112. First channel circuitry 112 can be configured to process signals related to sensor measurements and produce first differential signal 120 and/or second differential signal 125. First differential signal 120 can include first component signal 122 and second component signal 124. In some embodiments, first differential signal 120 can be or can include a differential sinusoidal output. For example, first component signal 122 can be a sinusoidal signal and second component signal 124 can be a sinusoidal signal having opposite polarity from first component signal 122. For example, in some embodiments, first component signal 122 and second component signal 124 can be 180 degrees out of phase. As one example, first component signal 122 can be a sine+ signal and second component signal 124 can be a sine− signal.

Additionally and/or alternatively, in some embodiments, second differential signal 125 can be or can include a differential sinusoidal output. For instance, second differential signal 125 can include first component signal 126 and second component signal 128. For example, first component signal 126 can be a sinusoidal signal and second component signal 128 can be a sinusoidal signal having opposite polarity from first component signal 126. For example, in some embodiments, first component signal 126 and second component signal 128 can be 180 degrees out of phase. Additionally and/or alternatively, first component signals 122 and 126 and/or second component signals 124 and 128 can have a known phase difference, such as an output phase difference. As one example, second differential signal 125 can be configured as a cosine output. As one example, first component signal 126 can be a cosine+ signal and second component signal 128 can be a cosine− signal.
The cosine signals of second differential signal 125 can thus have a 90 degree phase difference with the sine signals of first differential signal 120. For example, many systems can operate with regard to sine and cosine measurements from a sensor, such as an inductive rotation sensor.

Additionally and/or alternatively, in some embodiments, first channel 110 can include receive coils 114 and/or transmit coils 116. For instance, first channel circuitry 112 can be configured to energize transmit coils 116. The energized transmit coils 116 can produce an electromagnetic field that interacts with target 105. The electromagnetic field can further interact with receive coils 114, and/or may be affected by target 105. For instance, the electromagnetic field can induce receive signals (e.g., inductive currents) in the receive coils 114. The first channel circuitry 112 can measure, sample, and/or process the receive signals to produce the differential signals 120, 125. As one example, transmit coils 116 and receive coils 114 are discussed with reference to FIG. 3. Other suitable arrangements of coils 114, 116 and/or other suitable sensor arrangements (e.g., magnetic encoders) can be employed in accordance with example aspects of the present disclosure.

Additionally and/or alternatively, differential sensor 100 can include second channel 130. Second channel 130 can include second channel circuitry 132. Second channel circuitry 132 can be configured to process signals related to sensor measurements and produce third differential signal 140 and/or fourth differential signal 145. Third differential signal 140 can include first component signal 142 and second component signal 144. In some embodiments, third differential signal 140 can be or can include a differential sinusoidal output. For example, first component signal 142 can be a sinusoidal signal and second component signal 144 can be a sinusoidal signal having opposite polarity from first component signal 142. For example, in some embodiments, first component signal 142 and second component signal 144 can be 180 degrees out of phase. As one example, first component signal 142 can be a sine+ signal and second component signal 144 can be a sine− signal.

Additionally and/or alternatively, in some embodiments, fourth differential signal 145 can be or can include a differential sinusoidal output. For instance, fourth differential signal 145 can include first component signal 146 and second component signal 148. For example, first component signal 146 can be a sinusoidal signal and second component signal 148 can be a sinusoidal signal having opposite polarity from first component signal 146. For example, in some embodiments, first component signal 146 and second component signal 148 can be 180 degrees out of phase. Additionally and/or alternatively, first component signals 142 and 146 and/or second component signals 144 and 148 can have a known phase difference, such as an output phase difference. As one example, fourth differential signal 145 can be configured as a cosine output. As one example, first component signal 146 can be a cosine+ signal and second component signal 148 can be a cosine− signal. The cosine signals of fourth differential signal 145 can thus have a 90 degree phase difference with the sine signals of third differential signal 140. For example, many systems can operate with regard to sine and cosine measurements from a sensor, such as an inductive rotation sensor.

First differential signal 120 and third differential signal 140 can be correlated, such that first differential signal 120 corresponds to third differential signal 140.
For instance, the third differential signal 140 may convey identical, redundant, or otherwise corroborating information to the first differential signal 120. For example, in some embodiments, the pair of component signals 122, 124 can be nearly equivalent to and/or equivalent to the pair of component signals 142, 144 (e.g., a phase-shifted pair of component signals 142, 144). Additionally and/or alternatively, second differential signal 125 and fourth differential signal 145 can be correlated, such that second differential signal 125 corresponds to fourth differential signal 145. For instance, the fourth differential signal 145 may convey identical, redundant, or otherwise corroborating information to the second differential signal 125. For example, in some embodiments, the pair of component signals 126, 128 can be nearly equivalent to and/or equivalent to the pair of component signals 146, 148 (e.g., a phase-shifted pair of component signals 146, 148).

The first channel 110 and/or second channel 130 can be coupled to a sensing circuit 150. For example, a sensing circuit 150 can be configured to obtain component signals 122, 126, 144, 148 as illustrated in FIG. 1A. Other suitable signals of any of the differential signals 120, 125, 140, 145 can be obtained in accordance with example aspects of the present disclosure. The sensing circuit 150 can be configured to process measurements from the sensor 100. For example, the sensing circuit 150 can obtain cross-channel differential signals and/or perform safety checks. The sensing circuit 150 may be separate from sensor 100 and/or incorporated into sensor 100. For instance, in some embodiments, a sensing circuit 150 can be or can include a microcontroller and/or other suitable circuitry. In some embodiments, the sensing circuit can be configured to sample (e.g., digitally sample) the component signals 122, 126, 144, 148 and/or perform angle calculations for the output. In these embodiments, and/or if the angle and phase difference between the channels is near or exactly zero, the angles can be averaged to remove noise. As another example, the sensor 100 can include a differential input stage (e.g., in a sensing circuit 150) such that measurements can be directly obtained of the analog signals and/or calculations can be performed directly on the analog signals. For instance, the differential input stage can perform analog computations to directly compute values of the cross-channel differential signals. This can provide improved noise removal characteristics.

FIG. 1B depicts a block diagram of at least a portion of an example dual channel differential sensor 160 according to example embodiments of the present disclosure. The sensor 160 is similar to the sensor 100 of FIG. 1A, but signal lines associated with omitted signals (e.g., signals 124, 128, 142, 146 of FIG. 1A) are omitted from the sensor. Thus, while FIG. 1A depicts an embodiment where the omitted signals exist (e.g., are produced by channel circuitry 112, 132) and are simply not connected, FIG. 1B depicts an embodiment where the omitted signals are removed from the sensor entirely, and only four component signals are produced by the channel circuitry 112, 132. Both configurations depicted in FIG. 1A and FIG. 1B, in addition to and/or alternatively to any other suitable variations, may be employed in accordance with example aspects of the present disclosure.

FIG. 2 depicts plots 200 of example component signals forming an example differential signal from an example channel according to example embodiments of the present disclosure.
For example, component signals 202 and 204 are associated with first differential signal 210. For instance, first component signal 202 can have a first polarity (e.g., positive) and second component signal 204 can have a second polarity (e.g., negative). Second component signal 204 can be subtracted from first component signal 202 to produce first differential signal 210. For instance, first differential signal 210 can be a sine output. Additionally and/or alternatively, component signals 206 and 208 are associated with second differential signal 212. For instance, first component signal 206 can have a first polarity (e.g., positive) and second component signal 208 can have a second polarity (e.g., negative). Second component signal 208 can be subtracted from first component signal 206 to produce second differential signal 212. For instance, second differential signal 212 can be a cosine output.

FIG. 3 depicts example sensor coils 300 according to example embodiments of the present disclosure. For instance, the coils 300 can be disposed on a substrate, such as formed of traces on a printed circuit board (PCB), flexible printed circuit board, and/or other suitable substrate. For example, in some embodiments, the coils 300 are formed on a multi-layer substrate, such as a two-layer PCB. For example, each layer of a multi-layer substrate can include one or more of the coils 300. Additionally and/or alternatively, the coils 300 can be cut from metal sheets, formed of wound or bent wire or other conductive filament, and/or formed in any suitable fashion in accordance with example aspects of the present disclosure. The coils 300, such as the transmit coils 310 and/or the receive coils 302-308, can be disposed around a central axis 315. In some embodiments, the receive coils 302-308 may be configured to produce a pair of component signals. For instance, the receive coils 302-308 may each be measured at two points to produce the pair of component signals, only one of which may be utilized at a sensing circuit, in accordance with example aspects of the present disclosure. Additionally and/or alternatively, the receive coils 302-308 may be measured at only a single point.

The coils 300 can include transmit coils 310. The transmit coils 310 can be energized (e.g., by channel circuitry) to produce an electromagnetic field. For example, the transmit coils can be energized by any suitable electrical signal, such as a voltage signal, current signal, etc., and/or a constant signal and/or time-varying signal. The electromagnetic field produced by the transmit coils can interact with an environment of a sensor, such as a target, and may be altered by elements in the environment, such as the target. For example, a target can be configured to produce fringing fields that attenuate and/or boost the electromagnetic field at particular regions. As one example, a target can include a rotational target that rotates (e.g., coaxially with central axis 315) to alter the electromagnetic field as a function of rotational position of the target.

Additionally and/or alternatively, the coils 300 can include receive coils 302, 304, 306, and 308. First receive coil 302 can be configured to produce a first differential signal. For instance, a receive signal (e.g., an inductive current) can be induced in first receive coil 302 by an electromagnetic field, such as an electromagnetic field from transmit coils 310, induced by a target, present in an environment of the coils, and/or from any other suitable source.
Channel circuitry can sample and/or measure the first receive coil 302 to produce a pair of component signals associated with a first differential signal from the first receive coil 302. Similarly, second receive coil 304 can be configured to produce a second differential signal. For example, channel circuitry (e.g., associated with a same channel as first receive coil 302) can sample and/or measure the second receive coil 304 to produce a pair of component signals associated with a second differential signal from the second receive coil 304. For instance, the first receive coil 302 and second receive coil 304 can form a single channel. As illustrated in FIG. 3, the first receive coil 302 and second receive coil 304 are sinusoidal coils configured to produce a sinusoidal differential signal. Additionally and/or alternatively, as illustrated in FIG. 3, the second receive coil 304 is rotationally offset by 90 degrees with respect to the first receive coil 302. Thus, the first receive coil 302 can be configured to produce a sine output and the second receive coil 304 can be configured to produce a cosine output.

Similarly, third receive coil 306 can be configured to produce a third differential signal and fourth receive coil 308 can be configured to produce a fourth differential signal. For instance, third receive coil 306 and/or fourth receive coil 308 can be sampled and/or measured by channel circuitry associated with a different channel from first receive coil 302 and/or second receive coil 304. For instance, in some embodiments, first receive coil 302 and/or second receive coil 304 can be disposed on a first layer of a multi-layer substrate and third receive coil 306 and/or fourth receive coil 308 can be disposed on a second layer of the multi-layer substrate. As illustrated in FIG. 3, third receive coil 306 and/or fourth receive coil 308 are rotationally offset about central axis 315 by a channel phase offset of 45 degrees with respect to the first receive coil 302 and/or second receive coil 304. For instance, this channel phase offset can be implemented due to spatial limitations of a substrate containing the coils 300, interference considerations between the coils 300, etc. In some embodiments, the coils 300 may instead have a channel phase offset of less than about 45 degrees, such as less than about 15 degrees. As another example, in some embodiments, the coils 300 may have a channel phase offset of 0 degrees.

FIG. 4 depicts a flowchart diagram of an example method 400 for operating a dual channel differential sensor according to example embodiments of the present disclosure. Although FIG. 4 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 400 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.

For instance, the method 400 can include, at 402, obtaining a first component signal from a first channel and a second component signal from a second channel. The second channel may be independent from the first channel. The first component signal can have a first polarity and/or the second component signal can have a second polarity. The second polarity can oppose the first polarity.
Additionally and/or alternatively, the method 400 can include, at 404, obtaining a third component signal having the first polarity from the first channel and a fourth component signal having the second polarity from the second channel. For example, a sensing circuit can obtain the signals via an interface including one or more signal lines that are coupled to the dual channel differential sensor (e.g., a sensing circuit, coils, etc.). In some embodiments, each of the obtained component signals can have an associated signal line, such as for a total of four signal lines. In some embodiments, the component signals described above may each be one of a pair of component signals for differential signals, and signal lines associated with the other component signals of each differential signal can be omitted from the sensor (e.g., the interface and/or couplings to the interface) to provide reduced cabling (e.g., a reduction in the number of signal lines) required for a sensing circuit to interface with the dual channel differential sensor (e.g., from eight signal lines to four signal lines). Omitting the other signal lines can additionally and/or alternatively contribute to reduced cost (e.g., reduced operating and/or manufacturing cost), reduced bus width, reduced computation requirements (e.g., fewer signals to process, fewer signals to sample/measure at the coils, etc.), and/or various other advantages. For instance, in some embodiments, the interface may provide couplings to the omitted signals that may not be connected. In some embodiments, the interface may omit couplings to the omitted signals entirely.

In some embodiments, each of the signals can have an associated phase. For instance, a phase of the first component signal can differ from a phase of the third component signal by a channel phase difference. Additionally and/or alternatively, a phase of the second component signal can differ from a phase of the fourth component signal by the channel phase difference. In some embodiments, the first component signal and the second component signal can each be sine component signals, such as component signals associated with a differential sine signal. Additionally and/or alternatively, the first cross-channel differential signal can be a sine differential signal (e.g., a sine output). For example, the first cross-channel differential signal can be associated with a sine signal (e.g., wherein a zero value corresponds to a phase of 0 and/or 180 degrees) and can be computed based on component signals of differential sine signals from each channel. Additionally and/or alternatively, the third component signal and the fourth component signal can each be cosine component signals, such as component signals associated with a differential cosine signal. Additionally and/or alternatively, the second cross-channel differential signal can be a cosine output. For example, the second cross-channel differential signal can be associated with a cosine signal (e.g., wherein a zero value corresponds to a phase of 90 and/or 270 degrees) and can be computed based on component signals of differential cosine signals from each channel.

Additionally and/or alternatively, the method 400 can include, at 406, determining a first cross-channel differential signal based at least in part on the first component signal and the second component signal. For instance, the first component signal and the second component signal can be combined based on respective polarities.
For example, the first component signal may have a first polarity (e.g., positive) and the second component signal may have a second polarity (e.g., negative), and the second component signal may be added to and/or subtracted from the first component signal (e.g., subtracted based on a negative polarity). In some embodiments, the signals may be adjusted to resolve phase differences (e.g., channel phase differences) prior to determining the first cross-channel differential signal.

Additionally and/or alternatively, the method 400 can include, at 408, determining a second cross-channel differential signal based at least in part on the third component signal and the fourth component signal. For instance, the third component signal and the fourth component signal can be combined based on respective polarities. For example, the third component signal may have a first polarity (e.g., positive) and the fourth component signal may have a second polarity (e.g., negative), and the fourth component signal may be added to and/or subtracted from the third component signal (e.g., subtracted based on a negative polarity). In some embodiments, the signals may be adjusted to resolve phase differences (e.g., channel phase differences) prior to determining the second cross-channel differential signal. The second cross-channel differential signal can be a cosine output of the dual channel differential sensor. The second cross-channel differential signal may correspond to a desired output of the sensor. For example, the sensor may be configured to produce an overall cosine output. For example, in some embodiments, the overall cosine output can be obtained by subtracting a cosine− output of a second channel from a cosine+ output of a first channel. As one example, the second cross-channel differential signal can be computed as COSout = cosine1+ − cosine2−, wherein cosine1+ is a positive cosine component signal from the first channel and cosine2− is a negative cosine component signal from the second channel.

Additionally and/or alternatively, the method 400 can include, at 410, providing the first cross-channel differential signal as a first output of the dual channel differential sensor. Additionally and/or alternatively, the method 400 can include, at 412, providing the second cross-channel differential signal as a second output of the dual channel differential sensor. For example, the sensor may include an external interface configured to be energized by the first cross-channel differential signal and/or the second cross-channel differential signal such that the signals can be provided to an external device capable of reading the signals.

In some embodiments, a channel phase compensation can be applied to measurements from the sensor. For instance, the channel phase compensation can be applied to the output angle to correct for phase discrepancies between the first channel and the second channel. For instance, in some embodiments, determining an output angle between the first cross-channel differential signal and the second cross-channel differential signal can include applying a channel phase correction to the output angle. The channel phase correction can be based at least in part on the channel phase difference. For instance, in some embodiments, the channel phase correction can be applied to correct the channel phase difference. Additionally and/or alternatively, in some embodiments, the channel phase correction can be based at least in part on an amplitude of the first channel and an amplitude of the second channel.
For instance, in some embodiments, minor variations between the first and second channels can contribute to minor differences in the amplitudes of each channel. For instance, in some embodiments, determining an output angle between the first cross-channel differential signal and the second cross-channel differential signal can include determining an amplitude of the first channel and determining an amplitude of the second channel. For example, amplitudes of the channels can be determined by measuring amplitude (e.g., maximum amplitude) of the channels over one or more cycles (e.g., complete periods) of the component signals. For example, the amplitude of a channel may correspond to an amplitude of a component signal (e.g., one of the pair of component signals) and/or a differential signal (e.g., after combining the pair of component signals) and/or any other suitable signals (e.g., intermediate signals) associated with the channel. Additionally and/or alternatively, in some embodiments, determining the output angle can include determining the channel phase correction based at least in part on the amplitude of the first channel, the amplitude of the second channel, and the channel phase difference. For example, in some embodiments, determining the channel phase correction can be based on the formula:

Δθ = (φ/2) · ((A − B)/(A + B))

where Δθ is the channel phase correction, φ is the channel phase difference, A is the amplitude of the first channel, and B is the amplitude of the second channel. Additionally and/or alternatively, in some embodiments, determining the output angle can include applying the channel phase correction to the output angle. For example, the channel phase correction can be combined with (e.g., additively combined with) the output angle, such as added to and/or subtracted from the output angle.
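The combination and correction steps described above can be summarized in a short sketch. The function below is illustrative only; its names, arguments, and the sign convention used when applying the correction are assumptions made for this example, not details prescribed by the disclosure:

```python
import numpy as np

def output_angle(sine1_pos, sine2_neg, cos1_pos, cos2_neg, phi, amp_a, amp_b):
    """Return a corrected output angle in radians.

    phi   -- channel phase difference between the two channels (radians)
    amp_a -- measured amplitude of the first channel
    amp_b -- measured amplitude of the second channel
    """
    # Cross-channel differential signals, combined according to the
    # opposing polarities of the components (e.g., SINout and COSout).
    sin_out = sine1_pos - sine2_neg
    cos_out = cos1_pos - cos2_neg

    # Two-argument arctangent of the cross-channel differential signals.
    angle = np.arctan2(sin_out, cos_out)

    # Channel phase correction: delta = (phi/2) * ((A - B) / (A + B)).
    delta = (phi / 2.0) * ((amp_a - amp_b) / (amp_a + amp_b))
    return angle - delta  # subtraction here is an assumed sign convention
```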
FIG. 5 depicts a flowchart diagram of an example method 500 for operating a dual channel differential sensor according to example embodiments of the present disclosure. Although FIG. 5 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of the method 500 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure. For instance, a sensor (e.g., a sensing circuit) can implement steps of the method 500 as a safety check to verify desired operation of the sensor and/or system(s) coupled to the sensor (e.g., a motor, gearbox, control system, encoder, etc.). For instance, a sensing circuit can implement the method 500 (e.g., the safety check) to verify that the sensor is operating correctly and/or that operational conditions of the system(s) are safe and/or accurate. Operation of the sensor and/or system(s) can be adjusted based on the safety check. For example, operation of the system can be halted, warnings can be issued, etc. based on the safety check.

The method 500 can include, at 502, obtaining a first component signal and a third component signal from a first channel and a second component signal and a fourth component signal from a second channel. The second channel may be independent from the first channel. The first component signal can have a first polarity and/or the second component signal can have a second polarity. The second polarity can be opposing the first polarity. The third component signal can have the first polarity. The fourth component signal can have the second polarity. For example, a sensing circuit can obtain the signals via an interface including one or more signal lines that are coupled to the dual channel differential sensor (e.g., a sensing circuit, coils, etc.). In some embodiments, each of the obtained component signals can have an associated signal line, such as for a total of four signal lines.

The method 500 can include, at 504, determining a first channel angle based at least in part on the first component signal and the third component signal. For example, the component signals can both be from a same channel (e.g., the first channel) and/or have a same polarity (e.g., positive). For instance, in some embodiments, the signals can include a sine+ signal and a cosine+ signal from the first channel. In some embodiments, the first channel angle can be determined by a two-argument arctangent (e.g., atan2) function. For example, the first channel angle can be determined by atan2(sine+, cosine+).

Additionally and/or alternatively, the method 500 can include, at 506, determining a second channel angle based at least in part on the second component signal and the fourth component signal. For example, the signals can both be from a same channel (e.g., the second channel) and/or have a same polarity (e.g., negative). For instance, in some embodiments, the signals can include a sine− signal and a cosine− signal from the second channel. In some embodiments, the second channel angle can be determined by a two-argument arctangent (e.g., atan2) function. For example, the second channel angle can be determined by atan2(sine−, cosine−).

Additionally and/or alternatively, the method 500 can include, at 508, determining a cross-channel angle difference based at least in part on the first channel angle and the second channel angle. For example, in some embodiments, the sensing circuit can subtract the second channel angle from the first channel angle to determine the cross-channel angle difference.

Additionally and/or alternatively, in some embodiments, the method 500 can further include, at 510, determining that the cross-channel angle difference is within a cross-channel correlation tolerance margin. For example, the cross-channel correlation tolerance margin can be or can include a threshold (e.g., a magnitude threshold), a minimum and/or maximum, etc. For instance, in some embodiments, determining that the cross-channel angle difference is within the cross-channel correlation tolerance margin can include determining that a magnitude of the cross-channel angle difference is less than a correlation tolerance threshold, such as a correlation tolerance threshold S. In some embodiments, in response to determining that the cross-channel angle difference is within the cross-channel correlation tolerance margin, the sensor can be considered to be under normal operating conditions. For example, measurements can be obtained from the sensor and/or no correction control action related to correcting operation of the sensor may be performed.

Additionally and/or alternatively, the method 500 can include, at 512, determining that the cross-channel angle difference is not within the cross-channel correlation tolerance margin. The method 500 can further include, at 514, in response to determining that the cross-channel angle difference is not within the cross-channel correlation tolerance margin, performing one or more correction control actions to correct operation of the sensor and/or otherwise adjust operation of one or more system(s) coupled to the sensor and/or for which the sensor is configured to monitor conditions.
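A minimal sketch of this cross-channel safety check (steps 502 through 512) follows. The signal names, the tolerance value, and the handling of the nominal 180 degree offset between opposite-polarity channels are assumptions made for illustration; the disclosure leaves those details open:

```python
import numpy as np

def cross_channel_check(sine1_pos, cos1_pos, sine2_neg, cos2_neg,
                        tolerance_rad=0.05):
    """Return True when the two channel angles agree within the margin."""
    # Step 504: first channel angle from same-polarity (+) components.
    angle1 = np.arctan2(sine1_pos, cos1_pos)
    # Step 506: second channel angle from same-polarity (-) components.
    angle2 = np.arctan2(sine2_neg, cos2_neg)

    # Step 508: cross-channel angle difference. Because the second channel's
    # components have opposing polarity, the nominal difference is pi, so this
    # sketch measures the deviation from pi, wrapped into (-pi, pi].
    raw = angle1 - angle2 - np.pi
    diff = np.arctan2(np.sin(raw), np.cos(raw))

    # Steps 510/512: compare against the correlation tolerance margin.
    return abs(diff) < tolerance_rad
```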
For example, in some cases, in response to determining that the cross-channel angle difference is not within the cross-channel correlation tolerance margin, the correction control action can be or can include a warning, fault resolution action, braking action, shutdown, and/or other suitable correction control actions to ensure safe and reliable operation of a system.

For instance, one example embodiment of the present disclosure can include a dual channel differential sensor. The dual channel differential sensor can include a first channel including one or more first receive coils configured to produce one or more first coil signatures in response to interaction with a target, a first channel circuit configured to produce a first sine component signal and a first cosine component signal in response to the one or more first coil signatures, the first sine component signal and first cosine component signal having a first polarity, and a first interface configured to provide the first sine component signal and the first cosine component signal. Additionally, the dual channel differential sensor can include a second channel that includes one or more second receive coils configured to produce one or more second coil signatures in response to interaction with the target, a second channel circuit independent from the first channel circuit, the second channel circuit configured to produce a second sine component signal and a second cosine component signal in response to the one or more second coil signatures, the second sine component signal and second cosine component signal having a second polarity opposing the first polarity, and a second interface configured to provide the second sine component signal and the second cosine component signal. Additionally, the dual channel differential sensor can include a sensing circuit that is configured to obtain the first sine component signal, the first cosine component signal, the second sine component signal, and the second cosine component signal, determine a cross-channel sine differential signal based at least in part on the first sine component signal and the second sine component signal and a cross-channel cosine differential signal based at least in part on the first cosine component signal and the second cosine component signal, determine an output angle based at least in part on a two-argument arctangent of the cross-channel sine differential signal and the cross-channel cosine differential signal, and provide the cross-channel sine differential signal, cross-channel cosine differential signal, and output angle as outputs of the dual channel differential sensor.

As used herein, "about" in conjunction with a stated numerical value is intended to refer to within 20% of the stated numerical value.

While the present subject matter has been described in detail with respect to specific example embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the scope of the present disclosure is by way of example rather than by way of limitation, and the subject disclosure does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
11860204
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present invention.

Please refer to FIG. 1 and FIG. 2. FIG. 1 is a top view structural diagram of a display device provided by an embodiment of the present invention. FIG. 2 is a structural diagram at A in FIG. 1. The present invention provides a display device 10. The display device 10 comprises a display panel 100, a circuit board 200, and a chip-on-film 300. The details are as follows:

The circuit board 200 comprises a first detection part 210 and a second detection part 220. The chip-on-film 300 is electrically connected to the display panel 100 and the circuit board 200. Wherein, the chip-on-film 300 comprises a plurality of pins 310. The plurality of pins 310 comprise a plurality of conducting pins 311 and test pins 312. At least one conducting pin 311 is electrically connected to the first detection part 210. In this embodiment, one conducting pin 311 is electrically connected to the first detection part 210. The test pins 312 are disposed on one side of the conducting pins 311. The test pins 312 are electrically connected to the second detection part 220. The conducting pins 311 electrically connected to the first detection part 210 are connected to the test pins 312 by test wires 400.

Specifically, the first detection part 210 comprises a first sub-detection part 211. The second detection part 220 comprises a second sub-detection part 221. The test wires 400 comprise a first sub-test wire 410. The first sub-test wire 410 is disposed on the chip-on-film 300. The test pins 312 comprise a first sub-test pin 3121. The first sub-test wire 410 is connected to a conducting pin 311 and the first sub-test pin 3121. The first sub-test pin 3121 can be electrically connected by the first sub-test wire 410 to conducting pins 311 which are adjacent to the first sub-test pin 3121, or to conducting pins 311 which are away from the first sub-test pin 3121, and no restriction is made here. In this embodiment, the first sub-test pin 3121 being electrically connected by the first sub-test wire 410 to the adjacent conducting pins 311 is described as an example. Then, the conducting pins 311 electrically connected to the first sub-test wire 410 are connected to the first sub-detection part 211. The first sub-test pin 3121 electrically connected to the first sub-test wire 410 is connected to the second sub-detection part 221.

In one embodiment, please refer to FIG. 3. FIG. 3 is a structural diagram at B in FIG. 1. The chip-on-film 300 further comprises a wire 320 and a ground wire 330. The wire 320 is disposed on one side of the plurality of pins 310. The ground wire 330 is disposed on the other side of the plurality of pins 310. The wire 320 and the ground wire 330 are insulated from the pins 310. The wire 320 and the ground wire 330 are grounded. Specifically, the wire 320 is disposed on one side of the first pin 310 on the chip-on-film 300, and the ground wire 330 is disposed on one side of the last pin 310 on the chip-on-film 300.
In another embodiment, the wire 320 and the ground wire 330 need not both be insulated from the pins 310; that is to say, either one of the wire 320 and the ground wire 330 can be insulated from the pins 310.

In the present invention, the wire and the ground wire are disconnected from the pins, that is to say, the wire and the ground wire are insulated from the pins, avoiding the connection of the wire and the ground wire to four short pins on the chip-on-film. The four short pins are grounded; thus, the grounding of the pins is avoided, the abnormal display of the display device is avoided, and the display performance of the display device is guaranteed.

In the present invention, by disposing a first sub-test wire on the chip-on-film, the first sub-test pin is short-circuited with the conducting pins, the first sub-test pin is connected to the second sub-detection part, and the conducting pins are connected to the first sub-detection part, thus forming an impedance circuit for detecting the bonding region of the circuit board, such that the impedance of the bonding region of the circuit board can be detected. The structure is simple, and without destroying the structure in the display device, the impedance of the bonding region of the circuit board can be detected to ensure the display performance of the display device.

In another embodiment, the first detection part 210 comprises a third sub-detection part 212. The second detection part 220 comprises a fourth sub-detection part 222. The test wires 400 comprise a second sub-test wire 420, and the second sub-test wire 420 is disposed on the display panel 100. Next, the test pins 312 comprise a second sub-test pin 3122, and the second sub-test wire 420 is connected to an end part of the conducting pin 311 and an end part of the second sub-test pin 3122. That is to say, the second sub-test wire 420 is disposed on the display panel 100. The conducting pins 311 which are electrically connected to the second sub-test wire 420 are connected to the third sub-detection part 212. The second sub-test pin 3122 which is electrically connected to the second sub-test wire 420 is electrically connected to the fourth sub-detection part 222.

In the present invention, the second sub-test wire is connected to the end of the second sub-test pin and the end of the conducting pins, the second sub-test pin is connected to the fourth sub-detection part, and the conducting pins are connected to the third sub-detection part, forming a line to detect the total impedance of the bonding region of the circuit board and the bonding region of the display panel. Finally, according to the total impedance of the bonding region of the circuit board and the bonding region of the display panel and the impedance of the bonding region of the circuit board, the impedance of the bonding region of the display panel is obtained.

Please refer to FIG. 4. FIG. 4 is another structural diagram at A in FIG. 1. It should be noted that the differences between FIG. 4 and FIG. 2 are as follows: two of the conducting pins 311 are electrically connected to the first detection part 210, further improving the detection accuracy of impedance.

Please refer to FIG. 5. FIG. 5 is yet another structural diagram at A in FIG. 1. It should be noted that the differences between FIG. 5 and FIG. 2 are as follows: the first sub-test wire 410 is disposed on the side near the circuit board 200, further improving the detection accuracy of impedance.
Please refer to FIG. 2 and FIG. 6. FIG. 2 is the structural diagram at A in FIG. 1 described above. FIG. 6 is a flow chart of a preparation method of the display device provided by the embodiment of the present invention. The present invention also provides a detection method for the impedance of the display device 10, and the detection method is as follows:

Step S21, disposing a chip-on-film 300 between a display panel 100 and a circuit board 200. The chip-on-film 300 is electrically connected to the display panel 100 and the circuit board 200. The chip-on-film 300 comprises a plurality of pins 310. The plurality of pins 310 comprise a plurality of conducting pins 311 and test pins 312. The circuit board 200 comprises a first detection part 210 and a second detection part 220. Specifically, the first detection part 210 comprises a first sub-detection part 211 and a third sub-detection part 212, and the second detection part 220 comprises a second sub-detection part 221 and a fourth sub-detection part 222.

Step S22, electrically connecting any one of the conducting pins 311 and the test pins 312 by test wires 400. Specifically, the test wires 400 comprise a first sub-test wire 410 and a second sub-test wire 420. The test pins 312 comprise a first sub-test pin 3121 and a second sub-test pin 3122. Then, disposing the first sub-test wire 410 on the chip-on-film 300, connecting one end part of the first sub-test wire 410 to the first sub-test pin 3121, and connecting the other end part of the first sub-test wire 410 to any one of the conducting pins 311. Then, disposing the second sub-test wire 420 on the display panel 100, connecting one end part of the second sub-test wire 420 to the end part of the conducting pins 311, and connecting the other end part of the second sub-test wire 420 to the end part of the second sub-test pin 3122.

Step S23, electrically connecting the first detection part 210 and at least one conducting pin 311, and connecting the second detection part 220 to the test pins 312.

Step S24, detecting the first detection part 210 and the second detection part 220 to detect the impedance. Specifically, disposing the first sub-detection part 211 and the second sub-detection part 221 on the circuit board 200, connecting the first sub-detection part 211 to the conducting pins 311, and connecting the second sub-detection part 221 to the first sub-test pin 3121. Then, correspondingly connecting the two detection terminals of the detecting device to the first sub-detection part 211 and the second sub-detection part 221 to detect the impedance of the bonding region of the circuit board 200. The detection device can be a multimeter, but is not limited to this. That is to say, when the two detection ends of the multimeter are respectively connected to the first sub-detection part 211 and the second sub-detection part 221, the multimeter, the second sub-detection part 221, the first sub-test pin 3121, the first sub-test wire 410, the conducting pin 311, and the first sub-detection part 211 form an impedance line for detecting the bonding region of the circuit board 200, and the impedance of the bonding region of the circuit board 200 is detected. Then, disposing the third sub-detection part 212 and the fourth sub-detection part 222 on the circuit board 200, connecting the third sub-detection part 212 to the conducting pins 311, and connecting the fourth sub-detection part 222 to the second sub-test pin 3122.
Then, correspondingly connecting the two detection terminals of the detecting device to the third sub-detection part 212 and the fourth sub-detection part 222 to detect the total impedance of the bonding region of the circuit board 200 and the bonding region of the display panel 100. According to the impedance of the bonding region of the circuit board 200 and the total impedance of the bonding region of the circuit board 200 and the bonding region of the display panel 100, the impedance of the bonding region of the display panel 100 is obtained. That is to say, after detecting the impedance of the bonding region of the circuit board 200, the two detection terminals of the multimeter are respectively connected to the third sub-detection part 212 and the fourth sub-detection part 222; the multimeter, the third sub-detection part 212, the conducting pins 311, the second sub-test wire 420, the second sub-test pin 3122, and the fourth sub-detection part 222 form a circuit for detecting the total impedance of the bonding region of the circuit board 200 and the display panel 100, and the total impedance of the bonding region of the circuit board 200 and the display panel 100 is detected.

In the present invention, by disposing the first sub-test wire on the chip-on-film and disposing the second sub-test wire on the display panel, the first sub-test pin is connected to the conducting pin, the first sub-test pin is connected to the second sub-detection part, and the conducting pin is connected to the first sub-detection part. The multimeter is connected to the first sub-detection part and the second sub-detection part to detect the impedance of the bonding region of the circuit board; that is, without damaging the structure of the display device, the impedance of the bonding region of the circuit board is detected, which ensures the normal display of the display device. By disposing the second sub-test wire on the display panel, the second sub-test wire connects the conducting pin to the second sub-test pin, the conducting pins are connected to the third sub-detection part, and the second sub-test pin is connected to the fourth sub-detection part. The multimeter is connected to the third sub-detection part and the fourth sub-detection part, and the total impedance of the bonding region of the circuit board and the bonding region of the display panel can be obtained. The difference between the total impedance of the bonding region of the display panel and the bonding region of the circuit board and the impedance of the bonding region of the circuit board is the impedance of the bonding region of the display panel. This method is simple and easy to operate, and it does not damage the structure of the display device, so as to ensure that the display device can display normally.

The present invention provides a display device and a detection method for impedance of the display device by disposing test wires in the display device, electrically connecting the first detection part and the test pins through the test wires, and electrically connecting the test pins to the second detection part, so as to detect between the first detection part and the second detection part. This allows the impedance of the display device to be obtained without destroying the structure of the display device, thereby ensuring the display performance of the display device.

The above provides a detailed introduction to a display device and a detection method for impedance of the display device provided by the present invention.
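As a minimal illustration of the two-measurement arithmetic described above (the numeric values are made up for the example, not taken from the disclosure):

```python
# Step 1: impedance of the circuit board bonding region, measured via
# sub-detection parts 211/221 (illustrative value).
r_board = 1.8   # ohms

# Step 2: total impedance of the circuit board and display panel bonding
# regions in series, measured via sub-detection parts 212/222 (illustrative).
r_total = 4.5   # ohms

# The display panel bonding-region impedance is the difference.
r_panel = r_total - r_board
print(f"display panel bonding-region impedance: {r_panel:.1f} ohm")  # 2.7 ohm
```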
Specific examples are used herein to illustrate the principles and implementation of the present invention. The description of the above embodiments is only intended to help in understanding the present invention. Meanwhile, for those skilled in the art, there may be changes in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation to the present invention.
11860205
The skilled person in the art will understand that the drawings, described below, are for illustration purposes only. The drawings are not intended to limit the scope of the applicants' teachings in any way. Also, it will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Various apparatuses or processes will be described below to provide an example of various embodiments of the claimed subject matter. No embodiment described below limits any claimed subject matter, and any claimed subject matter may cover processes, apparatuses, devices, or systems that differ from those described below. The claimed subject matter is not limited to apparatuses, devices, systems, or processes having all of the features of any one apparatus, device, system, or process described below, or to features common to multiple or all of the apparatuses, devices, systems, or processes described below. It is possible that an apparatus, device, system, or process described below is not an embodiment of any claimed subject matter. Any subject matter that is disclosed in an apparatus, device, system, or process described below that is not claimed in this document may be the subject matter of another protective instrument, for example, a continuing patent application, and the applicants, inventors, or owners do not intend to abandon, disclaim, or dedicate to the public any such subject matter by its disclosure in this document.

Furthermore, it will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Figures illustrating different embodiments may include corresponding reference numerals to identify similar or corresponding components or elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the example embodiments described herein. However, it will be understood by those of ordinary skill in the art that the example embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the example embodiments described herein. Also, the description is not to be considered as limiting the scope of the example embodiments described herein.

It should also be noted that the terms "coupled" or "coupling" as used herein can have several different meanings depending on the context in which the term is used. For example, as used herein, the terms "coupled" or "coupling" can indicate that two elements or devices can be directly coupled to one another or indirectly coupled to one another through one or more intermediate elements or devices via an electrical element, electromagnetic element, electrical signal, or a mechanical element such as, but not limited to, a wire or cable, for example, depending on the particular context. Elements and devices may also be coupled wirelessly to permit communication using any wireless communication standard.
For example, devices may be coupled wirelessly using Bluetooth communication, WiFi, or another standard or proprietary wireless communication protocol.

It should be noted that terms of degree such as "substantially", "about", and "approximately" as used herein mean a reasonable amount of deviation of the modified term such that the end result is not significantly changed. These terms of degree should be construed as including a deviation of the modified term if this deviation would not negate the meaning of the term it modifies. Furthermore, the recitation of any numerical ranges by endpoints herein includes all numbers and fractions subsumed within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.90, 4, and 5). It is also to be understood that all numbers and fractions thereof are presumed to be modified by the term "about", which means a variation up to a certain amount of the number to which reference is being made if the end result is not significantly changed.

As stated in the background section, impedance spectroscopy has found increasingly widespread application as a non-invasive and non-intrusive technique for monitoring state and health properties of various electrical, electrochemical, and biological loads. During impedance spectroscopy, a load is injected (e.g., interrogated or perturbed) with one or more alternating-current (AC) signals characterized by different frequencies, or having different frequency components. At each applied frequency, the voltage and current response of the load is measured and the impedance (or complex resistance) of the load is determined in accordance with Equation (1):

Z(ω) = Ê(ω) / Î(ω)    (1)

wherein Z is the impedance of the load as a function of the applied frequency (ω), Ê is the measured potential across the load, and Î is the measured current flowing through the load. A load impedance spectrum may then be generated by plotting the calculated impedance response as a function of the applied frequencies (ω). In various cases, the impedance spectrum is plotted in the form of a real impedance versus complex impedance plot or a Bode plot.

The impedance data (plotted as a spectrum or in raw form) often provides valuable information regarding electrical, physical, chemical, and biological properties of the load. For example, in many cases, the load's impedance spectrum is compared against an ideal (or expected) impedance spectrum to diagnose faults in the load's performance. In other cases, the impedance spectrum may be used to generate an equivalent circuit model of the load (e.g., a small signal model), which provides insights regarding the load's operation, as well as the load's physical or electrical structure. In various cases, the equivalent circuit model may also be used to validate physics-based theoretical models of the load which are derived from first principles.

Electrical loads which may be the subject of impedance spectroscopy include, for example, motors, generators, capacitors, cables, inductors, or transformers. Impedance spectroscopy may also be performed on electrochemical loads in a technique known as electrochemical impedance spectroscopy (EIS). Electrochemical loads may include, for example, batteries (e.g., rechargeable batteries), fuel cells, electrolyzers, as well as membranes employed in membrane-based wastewater treatment (e.g., reverse osmosis (RO) membranes). In various cases, EIS may be used to measure various physical phenomena that occur over varying time scales within the electrochemical loads. For instance, EIS may be used for measuring fast phenomena that occur within the electrochemical load over shorter time scales (such as electron transfer), or slower phenomena that occur within the load over longer time scales (such as corrosion).
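As a concrete illustration of Equation (1), the sketch below estimates the complex impedance at one applied frequency from sampled voltage and current waveforms. The single-bin discrete Fourier correlation used here is a common estimator chosen for this example, not a method prescribed by the disclosure:

```python
import numpy as np

def impedance_at_frequency(v, i, freq, fs):
    """Estimate Z(w) = E(w) / I(w) from sampled waveforms.

    v, i -- sampled voltage across and current through the load
    freq -- applied perturbation frequency in Hz
    fs   -- sampling rate in Hz
    """
    t = np.arange(len(v)) / fs
    # Single-bin DFT: correlate each waveform with e^{-j*2*pi*f*t} to
    # extract its complex phasor at the applied frequency. The common
    # normalization factor cancels in the ratio.
    ref = np.exp(-2j * np.pi * freq * t)
    e_hat = np.dot(v, ref)
    i_hat = np.dot(i, ref)
    return e_hat / i_hat

# Usage sketch: magnitude and phase of a load at 1 kHz, sampled at 100 kHz.
# z = impedance_at_frequency(v_samples, i_samples, 1e3, 1e5)
# print(abs(z), np.angle(z))
```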
In various cases, for example, EIS may be used to determine the state of charge of a battery, electrochemical reactions occurring within batteries and fuel cells (e.g., diffusion and charge transfer), corrosion of metals, feed flow and recovery rates of membranes used in wastewater treatment, as well as organic and inorganic fouling of these membranes. Other properties of electrochemical loads which may also be determined using EIS include: solution resistance, electrode morphology, double-layer capacitance, charge-transfer resistance, and coating capacitance. In other cases, impedance spectroscopy may be performed on biological loads in a technique known as bioimpedance spectroscopy. For example, impedance spectroscopy may be used on biological loads such as cells or membranes to determine cell and/or membrane structure, composition, and density.

In various cases, different frequency ranges are required in order to evaluate or model different properties of a load. For example, and as stated previously, in some cases, low frequency ranges are used for evaluating physical load phenomena which occur over longer time scales, while high frequency ranges are useful for evaluating physical load phenomena which occur over shorter time scales. For instance, examples of applications requiring the use of high frequency ranges may include measurement of solution resistance, as well as measurement of the dielectric of materials (e.g., especially at industrial scales). Other applications which require the use of lower frequency ranges may include the measurement of corrosion effects. Accordingly, it is often necessary to interrogate a load using a wide range of frequencies in order to evaluate phenomena over a wide range of time scales and to generate an impedance spectrum containing sufficient information.

In conventional systems for impedance spectroscopy which are used in industrial applications, a load is coupled to a power converter, such as a switch-mode power supply (SMPS). The SMPS may be configured to convert regulated or unregulated power to a desired regulated DC voltage output for powering the load. In other cases, the SMPS may convert a regulated DC input voltage into a desired regulated DC output voltage. To effect the conversion, the converter includes a switching device (e.g., a metal-oxide semiconductor field-effect transistor (MOSFET), or an insulated-gate bipolar transistor (IGBT)) which alternates between an ON mode and an OFF mode according to a switching frequency. The switching of the transistor device results in a small AC ripple which is imposed over the DC output. When employed in impedance spectroscopy, the switching frequency is varied to generate different frequencies of AC signals. The load's impedance response is then determined as a function of the applied AC frequency.

Conventional industrial impedance spectroscopy systems, however, suffer from a number of drawbacks. For example, the maximum usable output AC ripple frequency, generated by the power converter, is limited to the Nyquist rate (e.g., half the switching frequency).
Further, the effective or functional bandwidth of AC ripple frequencies generated by the spectroscopy system (e.g., the bandwidth which avoids issues such as sampling aliasing) is typically only one-tenth of the Nyquist rate. Accordingly, standard industrial SMPS devices that are configured for maximum switching frequencies of 10 kHz to 300 kHz may only generate an effective bandwidth of AC ripple frequencies of between 0.5 kHz and 15 kHz. As a result, determining load impedance data at high frequency ranges may not be possible using only the limited effective frequency bandwidth that is generated using these power converters. Further, and in many cases, as the AC ripple is dependent on the switching frequency of the power supply, conventional industrial spectroscopy systems offer limited control over the amplitude, phase, and frequency components of the output AC ripple.

A further drawback is that operating power converters at high switching frequencies may also result in significant power loss. For example, the switching loss of a transistor increases in proportion to the switching frequency, and may be significant at very high frequency (VHF) ranges (e.g., megahertz (MHz) ranges). Switching loss can impair the efficiency of the power converter, and may result in the transistor generating excessive heat (e.g., which may cause the converter to require a larger heat sink).

Still a further drawback of conventional industrial impedance spectroscopy systems is the inverse correlation between the power level of the converter and the maximum switching frequency. In particular, when the power supply is used for powering large loads, the power supply might be restricted to low (5-30 kHz) switching frequencies due to a lack of available components that can manage both the level of power demand (power rating) of the load as well as the operation of the converter at the higher frequency. Accordingly, the frequency ranges generated by the industrial spectroscopy system may be limited by the power level of the converter.

In view of the foregoing, and in various embodiments described herein, there is provided a load analysis signal generator which is configured to generate load analysis signals having frequencies, or frequency components, within a wide frequency range. In at least one example application, the signal generator may be used in impedance spectroscopy for determining the impedance properties of a load over a wide frequency spectrum. In other example applications, the load analysis signal generator may also be used to impose sinusoidal or transient changes on a load that may have positive effects (e.g., improvements) on the functioning or operation of the load system.

As explained in further detail herein, the load analysis signal generator includes a multi-winding transformer having at least one primary winding and at least one secondary winding. The at least one primary winding is in series connection between a DC power supply and an interrogated load. A DC current, generated by the DC power supply, flows across the at least one primary winding to power the load. In various cases, DC current flowing across the at least one primary winding may result in an accumulation of DC flux in the core of the transformer, which may otherwise saturate the core. Accordingly, the at least one secondary winding of the transformer is coupled to a variable DC generator (also referred to herein as a "de-biasing" voltage source).
The "de-biasing" voltage source generates an inverse DC "de-biasing" current across the secondary winding which is configured to eliminate, or reduce, the accumulated DC flux in the core of the transformer. The "de-biasing" current minimizes power loss in the transformer and maintains the transformer's efficiency. A second secondary winding of the transformer is then also coupled to a variable AC generator that generates (or induces) one or more load analysis signals across the at least one primary winding. The load analysis signals superimpose over the DC current (i.e., in the primary winding), and the combined currents are injected into the load. In various cases, the frequency of the load analysis signals may be varied and the impedance properties of the load may be determined at different frequencies of the load analysis signal. In other cases, the load analysis signal may include more than one frequency component, and the impedance response of the load may be determined in relation to each frequency component.

The load analysis signal generator, which is provided herein, overcomes a number of the deficiencies inherent in conventional industrial impedance spectroscopy systems. In particular, as the signal generator does not rely on the main power converter's (SMPS) switching devices to vary the frequency of AC signals injected into the load, the signal generator is configured to generate high frequency signals without being capped at the Nyquist rate (e.g., the signal generator is not limited to an effective bandwidth of one-tenth of the Nyquist rate of the main power converter). Further, as the signal generator does not rely on varying the switching frequency to vary the frequency of the AC signal, the signal generator is also configurable, in various embodiments, to vary the amplitude, phase, and frequency components of the AC signal. Still further, the signal generator may achieve high frequency AC outputs with minimal to no power loss (e.g., switching loss). The signal generator is also configurable to de-couple the inverse correlation which exists in conventional industrial spectroscopy systems between the power demand of the load and the maximum switching frequency of the spectroscopy system (e.g., the signal generator is able to produce high frequency AC signals independent of the power demand of the load). In this manner, the signal generator is configured for use in broadband impedance spectroscopy in order to generate high resolution impedance data over an extended frequency range. This may allow for assessing a wide range of physical phenomena of a load (e.g., electrical, chemical, physical, and biological properties) that occur over short or long time scales and are determined when the load is perturbed using a wide range of frequency signals.

Referring now to FIG. 1, there is shown a simplified block diagram for a load impedance determining system 100 according to some embodiments. As shown, the system 100 generally includes a DC power supply 102, a load analysis signal generator 104, and a load 106. In at least some embodiments, the system 100 may also include a controller 108. The DC power supply 102 may be any suitable power supply that is configured to supply DC current (IDC) in order to power the load 106 (e.g., a DC voltage source).
In various embodiments, the DC power supply 102 may also include a power converter which converts unregulated AC or DC input voltage (e.g., from a voltage source, or power grid) to a regulated DC voltage output based on the power demands of the load 106. For example, in some cases, the DC power supply 102 can include a switch-mode power supply (SMPS) which uses a buck, boost, or buck-boost circuit topology (e.g., a galvanically isolated or non-isolated circuit topology) to generate a regulated DC voltage output.

The load analysis signal generator 104 is coupled in series between the power supply 102 and the load 106. As explained in further detail herein, the signal generator 104 is configured to generate a sinusoidal AC signal (also referred to herein as a "load analysis signal" (IAnalysis)) which is superimposed over the DC current (IDC). The combined AC and DC signals (IDC + IAnalysis) are injected into the load 106. In various embodiments, the signal generator 104 may be configured to generate different load analysis signals which oscillate at different frequencies. For example, where the system 100 is used in impedance spectroscopy, the signal generator 104 may inject the load 106 with load analysis signals of various frequencies, and may determine the impedance response of the load at each applied frequency. In at least some embodiments, the signal generator 104 may also be configured to generate load analysis signals within a wide frequency range (e.g., extending up to a megahertz (MHz) range) to provide for high resolution impedance spectrum data. In other embodiments, rather than generating multiple load analysis signals, the signal generator 104 may generate a single load analysis signal having multiple frequency components (also known as a mixed-frequency signal, or a multi-sine signal). The impedance response of the load may then be determined in relation to each applied frequency component.

Load 106 is any suitable physical load which is the subject of impedance measurements. For example, where the system 100 is applied in electrochemical impedance spectroscopy (EIS), the load may be a battery, a fuel cell, or an electrolyzer. The load may also be a membrane which is employed in membrane-based wastewater treatment (e.g., a reverse-osmosis (RO) membrane). In other cases, the load may be an electroflotation, electrocoagulation, and/or electro-oxidation water treatment cell. In at least some cases, the load 106 may be coupled to the system 100 using one or more electrodes. For instance, the load 106 may be positioned between two electrodes configured to apply the combined DC and AC voltage (i.e., generated by the power supply 102 and the load analysis signal generator 104).

In various embodiments, one or more sensors 110 may be coupled to the load 106. The sensors 110 may provide data and/or information to the controller 108 for use in determining the impedance response of the load 106 to load analysis signals of various frequencies (or load analysis signals which include different frequency components). For example, in some embodiments described herein, the sensor 110 may be a voltage or current sensor that is configured to measure the AC voltage differential or current across the load 106. For example, the voltage differential, in conjunction with a known value and frequency for the load analysis signal (IAnalysis), may be used by the controller 108 to determine the impedance response of the load in accordance with Equation (1). Controller 108 may be provided for controlling the various components of the system 100.
In at least some embodiments, the controller 108 may couple to the load analysis signal generator 104. The controller 108 may then control the frequencies and/or amplitudes of the load analysis signals generated by the signal generator 104. For example, in some cases, the controller 108 may direct the signal generator 104 to generate a pre-determined number of load analysis signals having pre-determined frequencies within a pre-determined frequency range. In other cases, the controller 108 may direct the signal generator 104 to generate a single load analysis signal having a pre-determined number of frequency components. The controller 108 may also control the time span of each load analysis signal, as well as the time interval between consecutive load analysis signals.

In other embodiments, the controller 108 may further couple to the sensor 110. The controller 108 may receive data measurements (e.g., voltage and current measurements) from the sensor 110, and may use the data measurements to determine the impedance response of the load 106. The controller 108 may further generate an impedance spectrum of the load 106 based on the load's impedance response at different applied frequencies.

In still other embodiments, the controller 108 may couple to the DC power supply 102. In particular, where the DC power supply 102 includes a power converter with a switching device, the controller 108 may adjust the switching frequency of the switching device to adjust the AC ripple frequency generated by the power converter (e.g., to minimize the AC ripple). In other cases, the controller 108 may adjust the duty cycle of the power converter (and in some cases, the switching frequency) to vary the regulated DC output generated by the power converter in order to accommodate the varying power demands of the load 106. As explained in further detail herein, the controller 108 may also couple to one or more sensors which are configured to measure either the DC current (IDC) flowing across the signal generator 104, or other parameters which relate to the DC current (IDC).

Referring now to FIG. 2A, there is shown a simplified circuit diagram for the load impedance determining system 100 of FIG. 1, according to some embodiments. As shown, the DC power supply 102 may include a DC voltage source 202 for powering the load 106. In some cases, the DC power supply 102 may also include a power converter 204 (e.g., an SMPS) which is coupled to the DC voltage source 202. In the illustrated embodiment, the power converter 204 is a DC/DC buck converter which is configured to step down an input voltage received from the DC voltage source 202. The buck converter 204 includes a forward-biased diode 204a in parallel arrangement with a capacitor 204b, and an inductor 204c coupled between the forward-biased diode 204a and the capacitor 204b. In other cases, the buck converter 204 may be a synchronous buck converter and may include a MOSFET in place of the diode 204a. In still other cases, the buck converter can have any one of a number of suitable circuit topologies. In the illustrated example, a transistor (e.g., a MOSFET) 204d is provided for switching the converter 204 between an ON mode and an OFF mode. The transistor 204d includes a drain node that is coupled to the DC voltage source 202, and a source node that is coupled to a shared node common to both the diode 204a and the inductor 204c. The transistor 204d also includes a gate node, which in some embodiments, is coupled to the controller 108.
The controller 108 may control the switching frequency of the transistor 204d by transmitting a pulse width modulated (PWM) signal to the gate node, which in turn, controls the transistor's operational state. In some cases, a gate driver may be located between the controller 108 and the gate node of transistor 204d in order to transform the control signal from controller 108 into a voltage signal for controlling the gate node. As mentioned previously, the controller 108 can control the transistor 204d to vary the duty cycle of the power supply 102 based on the power demands of the load 106. In some cases, the controller 108 may also vary the switching frequency of the transistor 204d to change the oscillating frequency of an AC ripple generated by the power converter. It will be appreciated that the illustrated circuit topology for the power converter 204 has only been shown herein by way of example, and that other suitable circuit topologies may be employed.

Still referring to FIG. 2A, the DC power supply 102 is configured to generate a near steady-state DC current output (IDC), which in some cases, may include a small AC switching ripple. The DC current (IDC) is fed to the load analysis signal generator 104. In various embodiments, the load analysis signal generator 104 is formed from a multi-winding transformer 208 which includes at least one primary-side winding 210 having N1 winding turns, and at least one secondary winding. In the illustrated example embodiment, the at least one secondary winding includes a first secondary-side winding 212 having N2 turns, and a second secondary-side winding 214 having N3 turns. In other embodiments, the first and second secondary-side windings may be combined into a single secondary winding. In still other embodiments, the primary-side winding may comprise, for example, a first primary-side winding and a second primary-side winding.

The primary winding 210 is coupled in series between the DC power supply 102 and the load 106. The primary winding 210 includes an input node 210a coupled to the output of the DC power supply 102, and an output node 210b coupled to the load 106. DC current (IDC), from the DC power supply 102, accordingly flows across the primary winding 210 to power the load 106.

The first secondary winding 212 is coupled in series to a de-biasing circuit 216, which includes a variable DC voltage generator 220 (also referred to herein as a "de-biasing" voltage source 220). In particular, the de-biasing circuit 216 is configured to eliminate the DC magnetic flux that may be generated in the transformer core as a result of the DC current (IDC) flowing across the primary winding 210. In this manner, the de-biasing circuit 216 ensures that the transformer 208 does not enter into saturation, and accordingly, does not suffer from reduced efficiency, increased power loss, or degradative mechanisms (e.g., increased risk of transformer overheating), or otherwise result in an open circuit which renders the system non-operational. To de-bias the transformer core, the variable DC generator 220 generates an inverse DC current (also referred to herein as a de-biasing current (IDe-bias)) across the secondary winding 212.
The de-biasing current (IDe-bias) is configured to be equal in magnitude (but inverse in direction) to the DC current (IDC), scaled in proportion to the turns ratio of the primary and first secondary windings, in accordance with Equation (2):

$$I_{\text{De-bias}} = \frac{N_1}{N_2}\, I_{DC} \qquad (2)$$

The de-biasing current (IDe-bias) generates a reverse flux in the transformer core which eliminates, or reduces, the flux bias generated by the DC current (IDC) flowing across the primary winding. Accordingly, the DC current (IDC) may flow across the primary winding without saturating the transformer 208. In various cases, the reverse flux may be generated by configuring the variable DC generator 220 to generate the de-biasing current (IDe-bias) to flow in the opposite direction to the DC current (IDC). In other cases, the de-biasing current (IDe-bias) may flow in the same direction as the DC current (IDC), but the secondary winding 212 may be wound in the reverse direction to the primary winding 210 in order to generate the reverse flux.

In various embodiments, the variable DC generator 220 may couple to the controller 108, which is configured to control the de-biasing current (IDe-bias) generated by the DC generator 220. For example, in at least some cases, the controller 108 may determine the necessary de-biasing current (IDe-bias) based on the amount of DC current (IDC) flowing through the primary winding 210. For example, in the illustrated embodiment, the controller 108 is coupled to one or more sensors 222 which provide data regarding the DC current (IDC) flowing across the primary winding 210. The controller 108 processes the data received from the sensors 222 and determines the appropriate de-biasing current (IDe-bias). The controller 108 may then adjust the variable DC generator 220 to generate the determined appropriate de-biasing current (IDe-bias). In this manner, the controller 108 may form part of a feedback loop which modifies the de-biasing voltage (or current) source 220 based on data from the sensors 222.

Various sensors 222 may be coupled to the controller 108 for use in determining the DC current (IDC) flowing across the primary winding 210. For example, in the illustrated embodiment, the controller 108 may couple to a voltage sensor 222a connected in parallel to the primary winding 210 (i.e., between the input node 210a and the output node 210b). The voltage sensor 222a may measure the differential DC voltage across the primary winding 210 and may transmit the measured voltage reading to the controller 108. The controller 108 may then determine the DC current (IDC) flowing across the primary winding 210 based on the voltage reading and a known impedance of the primary winding 210. In other embodiments, the controller 108 may couple to a current sensor 222b which is in series connection between the output node 210b of the primary winding 210 and the load 106. The current sensor 222b may directly measure the DC current (IDC) flowing across the primary winding 210 and may transmit this information to the controller 108. Accordingly, the controller 108 may determine the DC current (IDC) across the primary winding directly from the data received from the current sensor 222b. In other embodiments, the current sensor 222b may also be positioned between the DC power supply 102 and the input node 210a (of the primary winding), as well as after the load 106. In various cases, the current sensor 222b may also measure AC current (e.g., IAnalysis), and also transmit this measurement information to the controller 108.
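Equation (2) lends itself to a simple software feedback loop of the kind attributed to the controller 108. A minimal sketch follows; the sensor and generator interfaces (read_primary_dc, set_current) are hypothetical placeholders, and only the turns-ratio arithmetic comes from the text:

```python
def debias_setpoint(i_dc: float, n1: int, n2: int) -> float:
    """Equation (2): I_De-bias = (N1 / N2) * I_DC. The generator drives
    this current so that its flux opposes the primary winding's DC flux."""
    return (n1 / n2) * i_dc

def debias_feedback_step(sensor, dc_generator, n1: int = 20, n2: int = 20) -> None:
    """One iteration of the feedback loop: sense I_DC, then update the
    de-biasing source. Both objects and their methods are placeholders."""
    i_dc = sensor.read_primary_dc()                           # assumed API
    dc_generator.set_current(debias_setpoint(i_dc, n1, n2))   # assumed API
```

With equal turns (N1 = N2), the de-biasing current simply mirrors the sensed load current, which is why a pre-configured (open-loop) generator can suffice when IDC is known and stable.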
In still yet other embodiments, a hall effect sensor 222c may be located proximate the transformer 208. The hall effect sensor 222c may measure the level of DC magnetic flux present in the transformer 208, and may generate a voltage reading of the recorded flux. The controller 108 may receive the voltage reading from the hall effect sensor 222c, and may adjust the de-biasing voltage source 220 with a view to eliminating, or reducing, the measured DC flux in the transformer core. In various cases, the sensors 222 may be configured to transmit information to the controller 108 on a continuous basis, or periodically at pre-defined time intervals. In other cases, the sensors may only transmit readings in response to the occurrence of certain events. For example, the sensors may transmit readings only when a change (or a significant change) is detected in a monitored parameter. In still other cases, the sensors may transmit information only at the request of the controller 108. It will be appreciated that the sensor configuration illustrated in FIG. 2A has only been shown herein by way of example, and that other sensors and/or sensor configurations may be used for determining the DC current (IDC) flowing across the primary winding 210. In still other embodiments, the variable DC generator 220 may not be coupled to the controller 108, and may be pre-configured to generate a de-biasing current (IDe-bias) based on a known value for the DC current (IDC), as well as a known turns ratio N2:N1 between the primary and secondary windings.

Referring still to FIG. 2A, the second secondary winding 214 is coupled in series to a load analysis injection circuit 224, which includes a variable AC signal generator 228 (also referred to herein as a load analysis signal source 228). The load analysis signal source 228 is configured to generate a time-varying AC signal (IAC) across the secondary winding 214. The AC signal (IAC) flows across the secondary winding 214, and in turn, generates the load analysis signal (IAnalysis) across the primary winding 210. The load analysis signal is equal in frequency to the AC signal (IAC), and is otherwise related to the AC signal in accordance with Equation (3):

$$I_{\text{Analysis}} = \frac{N_3}{N_1}\, I_{AC} \qquad (3)$$

The load analysis signal (IAnalysis) is superimposed over the DC current (IDC) in the primary winding 210 to generate a combined AC and DC signal (i.e., IDC + IAnalysis) that is injected into the load 106. In various embodiments, the variable AC generator 228 may be configured to generate load analysis signals at variable frequencies, phases and/or amplitudes. For example, where the system 100 is used in impedance spectroscopy, the AC generator 228 may generate a plurality of load analysis signals, each having a different frequency. The load analysis signals may then be separately injected into the load 106, and the impedance response of the load, at each frequency, may be individually determined, i.e., to generate an impedance spectrum. In other embodiments, the variable AC generator 228 may generate a single load analysis signal having multiple frequency components. In at least some embodiments, the AC generator 228 may generate load analysis signals at high frequency ranges (or having high frequency components) which, in turn, allows for the impedance response of the load 106 to be determined over a wide frequency range.
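The scaling in Equation (3) and the superposition of the analysis signal onto the DC bias can be sketched numerically as follows; the amplitudes, turns counts and the 10 kHz test tone are illustrative only:

```python
import numpy as np

def analysis_current(i_ac: np.ndarray, n3: int, n1: int) -> np.ndarray:
    """Equation (3): I_Analysis = (N3 / N1) * I_AC — same frequency as the
    injected AC signal, scaled by the secondary-to-primary turns ratio."""
    return (n3 / n1) * i_ac

fs = 1e6                                   # 1 MHz sampling rate
t = np.arange(0, 1e-3, 1.0 / fs)           # 1 ms observation window
i_ac = 0.5 * np.sin(2 * np.pi * 10e3 * t)  # 0.5 A, 10 kHz source signal
i_dc = 5.0                                 # steady primary DC current, in A

# Combined current delivered to the load: I_DC + I_Analysis
i_load = i_dc + analysis_current(i_ac, n3=5, n1=10)
```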
In particular, this allows for assessing electrical, chemical, biological and physical properties of the load 106 that can only be determined when the load is perturbed using high frequency signals (e.g., including membrane properties, and bulk and surface resistance). As previously mentioned, the maximum frequency output of the AC signal generator 228 is not otherwise capped by the Nyquist rate of the DC power supply 102. Additionally, the AC generator 228 may generate high frequency load analysis signals without suffering from consequent power loss (e.g., switching losses), which may otherwise hamper the performance of conventional industrial impedance spectroscopy systems. Accordingly, the AC generator 228 is able to effectively generate high resolution impedance spectroscopy data over large frequency bandwidths.

In at least some embodiments, the AC generator 228 may further couple to the controller 108. The controller 108 may control the frequencies of the load analysis signals generated by the AC generator 228. For example, the controller 108 may control the AC generator 228 to generate a pre-determined number of discrete load analysis signals at pre-determined frequencies within a pre-determined frequency range. The impedance response of the load 106 may then be separately determined at each applied frequency. The controller 108 may also specify the time interval between when consecutive load analysis signals are generated and injected into the load 106. Accordingly, this may allow sufficient time for injecting each load analysis signal into the load 106, and calculating the resultant impedance response of the load. In still other cases, rather than generating multiple AC signals at multiple frequencies, the controller 108 may direct the AC signal generator 228 to generate a single mixed-frequency AC signal having a range of low and high frequency components. In at least some cases, the AC generator 228 may not be coupled to the controller 108, and may be pre-configured to automatically generate various load analysis signals at pre-determined frequencies and at pre-determined time intervals. Additionally, or in the alternative, the AC generator 228 may also be pre-configured to generate one or more load analysis signals with multiple pre-determined frequency components.

In order to determine the impedance response of the load at different applied frequencies of load analysis signals (or load analysis signals with different frequency components), the controller 108 may couple to the sensor 110 and receive data therefrom. In the illustrated embodiment, the sensor 110 is a voltage sensor which is connected in parallel arrangement to the load 106. The voltage sensor measures the differential AC voltage across the load 106 in response to an applied load analysis signal, and transmits the voltage reading to the controller 108. The controller 108 may then determine the impedance response of the load using the voltage reading, as well as known information regarding the magnitude and frequency of the injected load analysis signal (IAnalysis) (e.g., in accordance with Equation (1)). In some cases, where the load 106 is injected with a single load analysis signal having several frequency components, the controller 108 may be configured to decompose the AC voltage reading—received from the voltage sensor 110—into its various frequency components using any appropriate spectral and/or frequency decomposition method (e.g., a Fast Fourier Transform (FFT), or a Discrete Fourier Transform (DFT)).
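The FFT-based decomposition just described can be sketched as follows. Equation (1) is not reproduced in this excerpt, so the sketch assumes it is the usual phasor ratio Z(f) = V(f) / I(f); the sampling rate and nearest-bin selection are illustrative choices:

```python
import numpy as np

def impedance_per_frequency(v_load, i_analysis, fs, injected_freqs):
    """Estimate the complex impedance Z(f) = V(f) / I(f) at each injected
    frequency component of a multi-sine load analysis signal, using an FFT
    of the sampled load voltage and injected current."""
    v = np.asarray(v_load)
    i = np.asarray(i_analysis)
    bins = np.fft.rfftfreq(len(v), d=1.0 / fs)
    v_f = np.fft.rfft(v)
    i_f = np.fft.rfft(i)
    z = {}
    for f in injected_freqs:
        k = int(np.argmin(np.abs(bins - f)))  # nearest FFT bin to f
        z[f] = v_f[k] / i_f[k]                # complex impedance at f
    return z

# e.g., a two-tone analysis signal at 100 Hz and 1 kHz sampled at 100 kHz
# yields {100.0: Z1, 1000.0: Z2}; magnitude is abs(Z), phase is np.angle(Z).
```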
The controller 108 may then separately analyze the impedance response of the load to each applied frequency component.

Referring now to FIG. 2B, there is shown a simplified circuit diagram for a load impedance determining system 100′ of FIG. 1, according to some other embodiments. The load impedance determining system 100′ of FIG. 2B is generally analogous to the load impedance determining system 100 of FIG. 2A, with the exception that the determining system does not include a de-biasing circuit 216. Further, the transformer 208 includes only the primary-side winding 210, and a single secondary winding 214 coupled in series to the variable AC generator 228 of the injection circuit 224. In this embodiment, the transformer 208 may be selected to handle DC currents (IDC) on the order of 10 to 10,000 Amp-turns. Accordingly, for these applications, a de-biasing circuit 216 may not be required for de-biasing the transformer core and avoiding saturation.

Referring now to FIG. 3, there is shown a simplified block diagram of the controller 108 in accordance with some embodiments. As shown therein, the controller 108 generally includes a processor 302 in communication with a memory 304, a communication module 306, and a user interface 308. The processor 302 may be configured to execute a plurality of instructions to control and operate the various components of the controller 108. The processor 302 may also be configured to receive information from the various components of the controller 108 and to make specific determinations using this information. The determinations may then be transmitted to the memory device 304 and/or the communication module 306. For example, in various embodiments, the processor 302 may be configured to receive information, via the communication module 306, from one or more of the sensors 222. The processor may then use this information to determine the DC current (IDC) flowing across the primary winding 210 of the transformer 208. Based on this determination, the processor 302 may transmit, via the communication module 306, instructions to modify the de-biasing current (IDe-bias) generated by the variable DC generator 220 (i.e., to eliminate a DC flux bias in the transformer core). In other embodiments, the processor 302 may also be configured to transmit instructions, via the communication module 306, to the variable AC generator 228 to generate one or more load analysis signals (IAnalysis) having different frequencies, or having different frequency components, within a pre-defined frequency range. In still other embodiments, the processor 302 may be configured to receive, via the communication module 306, voltage readings from the voltage sensor 110. The processor 302 may then determine the impedance response of the load 106 based on a known frequency of a load analysis signal injected into the load 106. In cases where a multi-sine signal (or multi-frequency signal) is injected into the load 106, the processor 302 may be further configured to decompose the voltage reading into its separate frequency components, and accordingly, to determine the impedance response of the load in relation to each frequency component. In still yet other embodiments, the processor 302 may be configured to correlate the load's impedance response to an applied frequency in order to generate an impedance spectrum of the load over a range of frequencies. In at least some embodiments, the instructions which are executed by the processor 302 may be transmitted from a remote terminal, and received by the processor 302 via the communication module 306.
In other embodiments, the processor 302 may be pre-configured with specific instructions. The pre-configured instructions may be executed in response to specific events or specific sequences of events, or at specific time intervals.

The memory 304 may be, for example, a non-volatile read-write memory which stores computer-executable instructions and data, and a volatile memory (e.g., random access memory) that may be used as a working memory by the processor 302. In various embodiments, the memory 304 may be used to store determinations made by the processor 302 in respect of the impedance response of the load 106 for particular frequencies (or frequency components) of load analysis signals that are injected therein.

The communication module 306 may be configured to send and receive data, or information, to and from various components of the load impedance determination system 100. For example, as previously explained, the communication module 306 may receive data from one or more of the sensors 222 and the voltage sensor 110 of the system 100. In other cases, the communication module 306 may be configured to transmit instructions to the variable DC generator 220 and/or the variable AC generator 228. Accordingly, the communication module 306 can be configured to provide bi-directional communication. In still other embodiments, the communication module 306 may be configured to send and receive data to and from a remote terminal. For example, the communication module 306 may transmit to the remote terminal the impedance response of the load 106 to one or more applied load analysis signals. This information may be transmitted in real-time, or near real-time, to allow an operator of the remote terminal to monitor the state and health of the load 106 and to take immediate corrective action if a fault is detected in the load 106. The communication module 306 may also receive instructions from the remote terminal. For example, an operator of the remote terminal may transmit instructions to modify the number of load analysis signals generated by the AC generator 228, the frequencies (or frequency components) of the load analysis signals generated by the AC generator 228, and/or the frequency range of the generated load analysis signals. In still other embodiments, the communication module 306 may transmit and receive data and information from an external controller (not shown) which is coupled to the load 106. For example, the external controller may be configured to modify the operation of the load 106 based on information received about the impedance response of the load 106. The communication module 306 may also transmit impedance information to the external controller in real-time, or near real-time. In various cases, the communication module 306 may, for example, comprise a wireless transmitter or transceiver and antenna. In other cases, the communication module 306 may simply be configured for wired communication. In various cases, the communication module 306 may be configured for communication over public or private wired or wireless networks.

The controller 108 may also include a user interface 308. The user interface 308 may be one or more devices that allow a user, or operator, to interact with the controller 108. For example, the user interface 308 may have a keyboard or other input device that allows a user to input instructions into the controller 108 with respect to the operation of the load impedance determination system 100.
For example, in some cases, the user may input instructions to control the number of load analysis signals generated by the AC generator 228, or the frequencies of the load analysis signals generated by the AC generator 228 (or the frequency components of a mixed-frequency load analysis signal). In other cases, the user may input instructions to control the frequency range of the load analysis signals generated by the AC generator 228. In still other cases, the user can control the de-biasing current (IDe-bias) generated by the variable DC generator 220. Accordingly, the user interface 308 may allow direct control of the system 100 without requiring a remote terminal. In at least some embodiments, the user interface 308 may also include a display that allows the user to view the determined impedance response of the load 106 in response to different frequencies of load analysis signals injected into the load 106. In some cases, the display may allow the user to view the impedance response of the load in real-time, or near real-time, to allow the user to monitor the state and health of the load 106, and accordingly, to take immediate corrective action if a fault is detected. The user interface 308 may further include a graphical user interface (GUI) which facilitates user interaction.

Referring now to FIG. 4, there is shown an example B-H curve 400 of the transformer 208 of FIG. 2A. In particular, the B-H curve shows the magnetic flux density (B) as a function of the magnetic field strength (H) inside the transformer 208, resulting from the combined effect of: (a) the DC current (IDC) flowing through the primary winding 210 of the transformer 208, and (b) the de-biasing current (IDe-bias) flowing through the secondary winding 212. In particular, as shown, the B-H plot stays strictly within the linear region of the transformer's B-H curve, and does not otherwise saturate. Accordingly, the de-biasing current (IDe-bias) is able to effectively ensure that the transformer operates in the ideal zone, and does not otherwise suffer from a loss of efficiency. This, in turn, allows for effective operation of the load analysis signal generator 104 to inject AC signals into the load 106.

Referring now to FIG. 5, there is shown a process flow for an example method 500 for determining the impedance properties of the load 106. The method 500 can be carried out, for example, using the processor 302 of the controller 108 in FIG. 3. At 502, the DC current (IDC) flowing across the primary winding 210 of the transformer 208 is determined. As stated previously, the DC current (IDC) may be determined using information received from one or more sensors 222 in FIG. 2A. At 504, the de-biasing current (IDe-bias) is determined in accordance with Equation (2) and is applied by the variable DC generator 220 across the secondary winding 212 of the transformer 208. At 506, the variable AC generator 228, which is coupled to the second secondary winding 214, generates one or more load analysis signals (IAnalysis) having different frequencies, or frequency components, for injection into the load 106. At 508, the voltage and current across the load 106 may be measured. For instance, the voltage may be measured using the voltage sensor 110, and the current may be measured using the current sensor 222b. At 510, based on the measurements at 508, the impedance of the load 106 may be determined in response to each frequency (or frequency component) of the load analysis signals injected into the load.
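Steps 502-510 map naturally onto a short procedure. The sketch below strings them together; every object and method name is a hypothetical placeholder for the hardware described above, and the only fixed pieces are Equation (2) at step 504 and the V/I ratio at step 510:

```python
def run_method_500(current_sensor, dc_generator, ac_generator,
                   voltage_sensor, n1, n2, frequencies):
    """Sketch of method 500: de-bias the transformer, inject one load
    analysis signal per frequency, and build an impedance spectrum."""
    i_dc = current_sensor.read_dc()                # 502: determine I_DC
    dc_generator.set_current((n1 / n2) * i_dc)     # 504: Equation (2)
    spectrum = {}
    for f in frequencies:                          # 506: inject signals
        ac_generator.inject(frequency=f)
        v = voltage_sensor.read_ac()               # 508: measure voltage
        i = current_sensor.read_ac()               # 508: measure current
        spectrum[f] = v / i                        # 510: impedance at f
    return spectrum
```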
The present invention has been described here by way of example only. Numerous specific details are set forth herein in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that these embodiments may, in some cases, be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the description of the embodiments. Various modifications and variations may be made to these exemplary embodiments without departing from the spirit and scope of the invention, which is limited only by the appended claims.
11860206
DETAILED DESCRIPTION A load-pull tuner is disclosed herein. The load-pull tuner may be used for phased-array system characterization. Such characterization may be used to design phased-array transmitters for, for example, large mm-wave active phased-array antennas in high speed 5G backhaul and satellite communication. As recited above, when testing the performance of a phased-array system, it can be challenging to achieve a high reflection coefficient at the probe tip. The high signal attenuation at these frequencies may result in a low reflection coefficient at the probe tips. In order to maximize the load reflection coefficient, the losses in the signal path to the chip pad may be minimized. The tuner of the present invention comprises a transmission line network and dielectrics positionable above the transmission line network. The transmission line network comprises a main transmission line and two stubs connected to the main transmission line, where the two stubs are transmission lines. The main transmission line and the two stubs may be tunable transmission lines. The load-pull tuner may directly connect to a GSG probe. The load-pull tuner may be used at higher reflection coefficients for phased-array system characterization.

FIGS. 1a and 1b depict a grounded Co-Planar Waveguide (GCPW) line 100 with a dielectric slab 102 having a dielectric constant (εr) of 100. The slab 102 may operate as a perfect magnetic conductor (PMC) wall. The dielectric slab 102 may be placed on a top side of the GCPW line 100 as depicted. By varying the distance between the GCPW line 100 and the dielectric slab 102, the propagation constant of the transmission line may be changed. The distance between the GCPW line 100 and the dielectric slab 102 is shown as a gap (Δz). It will be appreciated that this structure may be comprehensively analyzed and employed as a phase shifter.

FIG. 2a depicts the characteristic impedance (Z0) and electrical length variation (βl) for a distance between the GCPW line 100 and the slab 102 of 1 μm to 30 μm (Δz = 1 μm to Δz = 30 μm) at 30 GHz (analyzed in ANSYS-HFSS). FIG. 2a shows that when the distance or gap is 5 μm to 30 μm, the electrical length of the line varies from about 160° to about 322° and the characteristic impedance of the line remains relatively constant. FIG. 2b shows an insertion loss of the GCPW line 100. The scattering parameter (S21) is −0.5 dB when the gap or distance is 5 μm to 30 μm at 30 GHz (analyzed in ANSYS-HFSS).

FIG. 3 depicts an embodiment of a tuner 300. The tuner 300 may be a double-stub matching network with stubs 302. It will be appreciated that the double-stub matching network may be used to maximize the transformation ratio. The tuner 300 comprises lines TL1, TL2, and TL3, and connectors 304 and 308. The connector 304 may be a south west male connector which connects to a PNA port 306, and the connector 308 may be a south west female connector which connects to a GSG probe 310. It will be appreciated that the connector 308 can remove extra adaptor loss due to GSG probes. The structure of the tuner 300 may be designed to connect directly to the GSG probe 310. This may allow the extra cable loss present in existing load-pull tuner systems, such as 3-4 dB of cable loss, to be omitted. One or more of the lines TL1, TL2, and TL3 may be tunable lines. The lines TL1, TL2, and TL3 may be made of GCPW lines. The tuner 300 may then be loaded with a high permittivity dielectric slab.
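Before turning to the tuning mechanics, it may help to see why a double-stub topology can present a wide range of impedances. The sketch below cascades ideal lossless ABCD matrices for a shunt-stub / series-line / shunt-stub network; it is a first-order illustration under ideal-line assumptions, not a model of tuner 300 itself, and the electrical lengths in the example are arbitrary:

```python
import numpy as np

Z0 = 50.0  # reference/system impedance in ohms

def line(beta_l: float) -> np.ndarray:
    """ABCD matrix of a lossless series line of electrical length beta_l (rad)."""
    return np.array([[np.cos(beta_l), 1j * Z0 * np.sin(beta_l)],
                     [1j * np.sin(beta_l) / Z0, np.cos(beta_l)]])

def shorted_stub(beta_l: float) -> np.ndarray:
    """ABCD matrix of a shunt short-circuited stub: Zin = j * Z0 * tan(beta_l)."""
    y = 1.0 / (1j * Z0 * np.tan(beta_l))
    return np.array([[1.0, 0.0], [y, 1.0]])

def input_gamma(stub1_deg: float, line_deg: float, stub2_deg: float,
                z_load: complex = 50.0) -> complex:
    """Reflection coefficient looking into stub-line-stub terminated in z_load."""
    m = shorted_stub(np.deg2rad(stub1_deg)) @ line(np.deg2rad(line_deg)) \
        @ shorted_stub(np.deg2rad(stub2_deg))
    (a, b), (c, d) = m
    z_in = (a * z_load + b) / (c * z_load + d)
    return (z_in - Z0) / (z_in + Z0)

# Sweeping the two stub electrical lengths moves the presented reflection
# coefficient around the Smith chart.
print(abs(input_gamma(200.0, 90.0, 300.0)))
```

Because the stubs enter through tan(βl), small gap changes that retune βl (as in FIG. 2a) translate into large movements of the presented impedance.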
It will be appreciated that using tunable GCPW lines in a double-stub matching circuit may transform a 50 Ω load into any desired point on the Smith-Chart. This can be achieved by changing the electrical length of the stubs 302 and/or the line TL2. The effective electrical length of each stub 302 may be changed by adjusting the gap between the line TL1 or TL3 and the dielectric slab. Similarly, the electrical length of the line TL2 may be changed by adjusting the gap between the line TL2 and the dielectric slab. It will be appreciated that these gaps are similar to the gap or distance described with regard to FIGS. 1a and 1b. The adjustment means for the gap between the dielectric slab and at least the stubs 302 is further described below.

As depicted in FIG. 3, the stubs 302 may be two grounded stubs, such as 7 mm long grounded stubs. As described above, the stubs 302 may be loaded with the high permittivity dielectric. In an embodiment, the high permittivity dielectric may have a dielectric constant (εr) of 100, and the gap between each stub and the dielectric may vary from 5 μm to 30 μm. The transmission line substrate may be RO4350 with a dielectric constant (εr) of 6.15 and tan δ = 0.003. It will be appreciated that the dielectric slab may have different properties and a different constant, the lines may be formed of another suitable transmission line substrate, and/or the gap between the stubs and the dielectric slab may be larger or smaller.

FIG. 4a shows a response at 30 GHz of the embodiment described above, when lines TL1 and TL3 are tunable high permittivity dielectric loaded GCPW lines, and TL2 is a straight unloaded GCPW line.

FIG. 5a depicts an embodiment of a double-stub matching network 500 on an RO4360 substrate. The whole structure of the double-stub matching network 500 on the RO4360 substrate may have a length of 35 mm, may directly connect to a GSG probe, and may operate as a tuner. The double-stub matching network 500 may be similar to the tuner 300, comprising three lines (TL1, TL2, and TL3) that may be GCPW lines. A dielectric slab may be loaded onto the double-stub matching network 500 such that there is a dielectric for each transmission line of the network. The gap or distance between the lines of the double-stub matching network 500 and the dielectrics may be adjusted. To adjust the distance or gap, an actuator may be used for each dielectric. In an embodiment, a magnetic actuator may be used to move the high permittivity dielectrics up and down as shown in FIGS. 5b, 5c, and 5d. As recited above, there may be an actuator for each line or dielectric. In the system of FIGS. 5a-5d, the double-stub matching network 500 has two tunable lines (TL1 and TL3), where each tunable line is loaded with a dielectric, and the gap between each line and dielectric is adjustable by an actuator. In embodiments where the actuators are magnetic actuators, the distance between the dielectric and the GCPW line surface may be adjusted by changing the current in the coils. It will be appreciated that there is an additional dielectric and actuator when the double-stub matching network 500 has three tunable lines (TL1, TL2, and TL3). The actuators may be controlled through a digitally controlled current source. FIG. 5e depicts a bottom view of the double-stub matching network 500 on the RO4360 substrate.

FIGS. 6a-6d depict a measured scattering parameter (S11) at various frequencies of the tuner using the magnetic actuators shown in FIGS. 5a-5e (measured using a two-port test). A maximum reflection coefficient (|Γ|) of 0.71 is measured at 24.6 GHz as shown in FIG. 6a.
Smith-Chart coverage up to a maximum reflection coefficient (|Γ|) of 0.35 is fully achieved at 35 GHz as illustrated in FIG. 6c, and a resistive impedance of 15 Ω to 250 Ω is achieved at 40 GHz. It will be appreciated that with only two tunable lines in the tuner of FIGS. 5a-5e, the coverage is limited to a maximum reflection coefficient (|Γ|) of 0.7 and may not cover the entire Smith-Chart. It will be appreciated that in cases of two tunable lines, the coverage of the Smith-Chart is limited and there are certain blind spots (see FIG. 4a). This may be due to a known problem with double-stub tuners, whereby not all loads can be matched for fixed stub spacings. To avoid such limitations, the straight unloaded GCPW line TL2, described above in the double-stub matching network, may be replaced with a tunable GCPW line. It will be further appreciated that higher loss of the conductor due to surface roughness, and the alignment inaccuracy in moving the dielectrics, are among the factors that may also limit the Smith-Chart coverage of the tuner with two tunable lines. More coverage may be achieved by using a GCPW line with a lower loss substrate, a conductor with less surface roughness, and a carefully optimized design.

As described above, in order to maximize the coverage of the Smith-Chart, the double-stub matching network may comprise a tunable line for TL2. In such a case, the tunable lines TL1, TL2, and TL3 are made of GCPW lines which are loaded with a high permittivity dielectric slab. In such an embodiment, there is a dielectric for each tunable line (TL1, TL2, and TL3) and there may be an actuator for each dielectric to adjust the gap between each line and dielectric. FIGS. 4b, 4c, and 4d depict a reflection coefficient at 20, 30, and 40 GHz, respectively, of an embodiment of a tuner with three tunable lines (TL1, TL2, and TL3). It will be appreciated that, as depicted in FIGS. 4b-4d, a Smith-Chart coverage of |Γin| < 0.8 may be achieved over a wide bandwidth using tunable GCPW lines TL1, TL2, and TL3. It will be further appreciated that the substrate and ohmic loss of the line may limit the maximum achievable reflection coefficient. As described, a tuner having three tunable lines (TL1, TL2, and TL3) may cover the Smith-Chart with a reflection coefficient (|Γ|) < 0.8 over a wide bandwidth. It will be appreciated that a reflection coefficient of 0.7 (Voltage Standing Wave Ratio (VSWR) = 5.6) and 0.6 (VSWR = 4) is measured at 24.6 GHz and 40 GHz, respectively.

The above described tuner with three tunable lines TL1, TL2, and TL3 may be used as a Ka-band load-pull tuner. The tuner may be low-cost, and may be a compact solution for Ka-band load-pull tuners. The tuner may be used for phased-array system characterization at higher reflection coefficients. A π-network with three tunable transmission lines may be used to cover the entire Smith-Chart over a wide bandwidth. The system may cover the Smith-Chart with a reflection coefficient at the load (ΓL) < 0.9. It will be appreciated that the tuner having three tunable lines TL1, TL2, and TL3 may be used for testing a phased-array transmitter chip under impedance mismatch conditions. The above described tuner may be used for microwave, mm-wave and sub-millimeter-wave systems. The compact design allows such a tuner with three tunable lines to be used for on-wafer probing and may remove any extra cable in the path of the signal.
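The VSWR figures quoted above follow from the standard relation between VSWR and reflection-coefficient magnitude; a quick sanity check (the text's 5.6 appears to be a rounded value):

```python
def vswr(gamma_mag: float) -> float:
    """Standard relation VSWR = (1 + |Γ|) / (1 - |Γ|), valid for 0 <= |Γ| < 1."""
    return (1.0 + gamma_mag) / (1.0 - gamma_mag)

print(vswr(0.7))  # ≈ 5.67 (quoted as 5.6 in the text)
print(vswr(0.6))  # 4.0
```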
It will be appreciated by one of ordinary skill in the art that the system and components shown in the figures and described herein may include components not shown in the drawings. For simplicity and clarity of illustration, elements in the figures are not necessarily to scale, are only schematic, and are non-limiting as to the structure of the elements. It will be apparent to persons skilled in the art that a number of variations and modifications can be made without departing from the scope of the invention as described herein.
11860207
DETAILED DESCRIPTION There may be scenarios where a component installed in a computing device is not the same as an expected component for the computing device. For example, the computing device may be tampered with during the manufacturing process, before distribution to an end user or even after the end user has received the computing device. There may be various opportunities in a supply chain or after distribution to an end user for an entity to tamper with the computing device or supply a compromised or unauthorized component for installation in the computing device. For example, on a production line, a compromised component may be intentionally or unintentionally installed on the computing device. In another example, an attack could be performed on the computing device in an untrusted environment such as during transit from a manufacturing facility to an end user, or even while the computing device is left unattended (e.g., by an end user).

FIG. 1 depicts a flowchart of a method 100, which may be a computer-implemented method, of determining certain information about a computing device. The method 100 may be implemented by an entity such as an end user, manufacturer or other verifying entity. The method 100 may be implemented using processing circuitry of, for example, a personal computer, dedicated processing apparatus, server, cloud-based service or any other computing device.

The method 100 comprises, at block 102, receiving an indication of an electrical parameter associated with at least part of a computing device. In some examples, the electrical parameter may be measured at a specified location associated with the computing device. In some examples, the electrical parameter may comprise a quantity such as capacitance, inductance, impedance, etc. The indication may comprise or be indicative of an electrical value such as a voltage, charge and/or current value, etc. Measurement of the electrical value (e.g., over time) may be used to determine the electrical parameter. In some examples, a component (e.g., capacitor, inductor, resistor, etc.) may be associated with a certain electrical parameter (e.g., capacitance, inductance, impedance, etc.). The electrical parameter may vary depending on the electric field surrounding the component and/or an electric field generated by the component itself. The component may be associated with a certain electrical value (e.g., voltage and/or current, etc., measured across the component). Thus, the electrical value may be a measurable quantity (associated with the component) which can be used to determine the electrical parameter (e.g., the capacitance, inductance, impedance, etc.). In some examples, measuring the electrical value may comprise measuring at least one of voltage, current, voltage and/or current evolution over a specified time, a charge/discharge time, resonance frequency and/or a variation of the voltage of a polarized component (e.g., a capacitor), etc. In some examples, receiving the indication of the electrical parameter may comprise receiving the value (or an indication thereof) of the electrical parameter (e.g., a capacitance value, inductance value, impedance value, etc.). In some examples, receiving the indication of the electrical parameter may comprise receiving the electrical value (or an indication thereof), which can be used to determine the value of the electrical parameter.
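As one concrete illustration of recovering an electrical parameter from a measured electrical value, the charge-time approach mentioned above can be inverted analytically for a simple RC network. This is a generic textbook calculation under an assumed first-order step response, not a description of any particular sensor in this document; the component values are illustrative:

```python
import math

def capacitance_from_charge_time(t: float, r: float,
                                 v_measured: float, v_supply: float) -> float:
    """Invert the RC step response V(t) = Vs * (1 - exp(-t / (R*C)))
    to recover C from the time t taken to reach v_measured."""
    return -t / (r * math.log(1.0 - v_measured / v_supply))

# Example: reaching ~63.2% of the supply after 1 ms through 10 kΩ implies
# a capacitance of roughly 100 nF (at 63.2% the elapsed time is ~R*C).
c = capacitance_from_charge_time(1e-3, 10e3, 0.632, 1.0)
print(f"{c * 1e9:.1f} nF")  # ~100.0 nF
```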
In some examples, the electrical parameter may be associated with a component of the computing device itself (e.g., the component may be part of the circuitry of the computing device). In some examples, the electrical parameter may be associated with a sensor or other device for obtaining certain information regarding the computing device (e.g., the component may not be part of the circuitry of the computing device; instead the component may be part of the sensor or other device). As will be explained in more detail below, the received indication of the electrical parameter may be used to determine or otherwise infer certain information regarding the computing device.

The method 100 comprises, at block 104, determining, using processing circuitry, whether or not the indication is indicative of an expected electric field distribution associated with a specified hardware configuration for the computing device. As will be explained in more detail below, the specified hardware configuration for the computing device may affect the electric field distribution. Thus, in some cases, a variation in the hardware configuration for the computing device (e.g., a variation in the number, location, type and/or distribution of electronic components in the computing device) may influence the electric field distribution. A variation in the electric field distribution may, in some cases, affect the electrical parameter. For example, if a component (that is associated with the electrical parameter) is at a certain location relative to the computing device, a variation in the hardware configuration of the computing device may affect the electric field observed by the component. In other words, if the component observes a certain electric field at its location, the value of the electrical parameter may be affected by the electric field at its location. The electric field observed at a certain location may be dependent on the hardware configuration of the computing device. Thus, the electric field distribution may depend on the hardware configuration of the computing device. The electrical parameter associated with the component may be influenced by the hardware configuration since the electrical parameter observed at the component may be influenced by any variation in the electric field (which may be dependent on the hardware configuration). A measurement of the electrical parameter at different locations relative to the computing device may be indicative of the electric field distribution associated with the hardware configuration for the computing device.

The indication received in block 102 may be used to determine whether or not the electric field at a certain location is indicative of the expected electric field distribution. For example, for the specified hardware configuration, a certain electric field may be expected to be observed at a certain location. If this certain electric field is observed at this certain location, the measurement of the electrical parameter at this location may be indicative that the electric field distribution is as expected. Further measurements of the electrical parameter at other locations may be used to provide confirmation that the electric field distribution is as expected. In this case, a determination may be made that the computing device has the specified (e.g., expected) hardware configuration.
However, if the measurement of the electrical parameter at the location is indicative of a different electric field distribution (e.g., an unexpected electric field distribution), this may be indicative that the computing device does not have the specified hardware configuration. For example, if the hardware configuration is not as expected, the electric field distribution may be different from what is expected. In other words, the received indication of the electrical parameter may be indicative that the hardware configuration for the computing device being tested is not the same as the specified (e.g., expected) hardware configuration for the computing device. Thus, in some examples, the indication of the electrical parameter may be used to determine whether or not a computing device under test has the specified hardware configuration.

In some examples, the specified hardware configuration for the computing device may refer to the number, type and/or distribution of hardware (e.g., electronic hardware) components used in the computing device. A component (e.g., a hardware component) of the computing device may affect the electric field distribution. In some examples, the component may itself generate an electric field, which may contribute to the overall electric field distribution associated with the computing device. In some examples, the component may affect the electric field generated by another component (e.g., of the computing device or of another device such as a sensor). Examples of hardware components include, for example, electrical conductors, resistors, capacitors, inductors, logic gates, microprocessors and/or any other component capable of generating an electric field or influencing an electric field. The hardware components may affect the electric field in different ways depending on whether the computing device is powered on or off. In some examples, the circuitry of the computing device may generate an electric field, e.g., due to the flow of current through the circuitry and/or the distribution of charge throughout the computing device. In some examples, the circuitry of any other components or devices (such as a sensor described below) may generate an electric field. In some examples, the circuitry of the computing device and/or the circuitry of any other components or devices may affect the electric field distribution associated with the specified hardware configuration for the computing device. The number, type and/or distribution of components (including electrical conductors) of the computing device may affect the electric field distribution associated with the specified hardware configuration for the computing device. The circuitry of the computing device and/or the electric field generated by such circuitry may affect certain electrical parameters observed at certain locations. In some examples, the location may refer to a component of the computing device itself. In some examples, the location may refer to a component of another device, such as a sensor that is not part of the circuitry of the computing device itself. A determination of the electrical parameter at a certain location associated with the computing device may be indicative of the electric field distribution for the computing device.
In some examples, the electric field distribution may comprise an electric field contribution due to at least one of the circuitry (e.g., including a hardware component) of the computing device and the circuitry of any other components or devices (such as a sensor) in the proximity of the computing device. Since the number, type and/or distribution of components of the computing device may affect the electric field distribution, the electric field distribution may be indicative of the type of components in the computing device and/or the distribution of these components. If the computing device is tampered with (e.g., if a hardware change occurs), this tampering may affect the electric field distribution. For example, if any component of the computing device is new or compromised in some way, this component may affect (e.g., modify) the electric field distribution. This change to the electric field distribution may be detectable via the indication of the electrical parameter. An entity such as a device manufacturer, end user or other verifying entity may use the indication to determine whether or not the computing device has been tampered with (e.g., whether the hardware configuration of the computing device is not as expected).

In some examples, the method 100 may enable a determination to be made regarding whether or not the computing device has been tampered with. Tampering may take various forms such as: adding a compromised component, modifying an existing component, or replacing an existing component with a compromised component. Thus, in some examples, the method 100 may enable a determination to be made regarding whether a change to the hardware of the computing device has occurred. The method 100 may allow a verifying entity such as an end user or device manufacturer to be confident that the provenance and/or integrity of the computing device is as expected. The method 100 may facilitate the detection (e.g., in real-time or after the event) of a tamper attempt. Accordingly, another tamper attempt indicator (e.g., on the device packaging) may not be used, or the method 100 may be used alongside the other tamper attempt indicator to provide additional assurance to the verifying entity.

FIG. 2 shows an example system 200 for implementing certain methods described herein (e.g., method 100). In this example, the system 200 enables certain information to be determined about a computing device 202 (e.g., a printed circuit assembly (PCA)). For example, the system 200 may enable a determination to be made as to whether or not the computing device 202 has been tampered with (e.g., whether or not a change to the hardware of the computing device 202 has occurred). The example system 200 comprises a sensor 204, which in this example comprises a capacitor array 206, for measuring an electrical parameter (in this example, capacitance) at specified locations associated with the computing device 202. A capacitor 206a of the capacitor array 206 is positioned at a location associated with the computing device 202. The capacitor 206a may be sensitive to the local electric field (e.g., generated by the capacitor 206a itself and/or generated or otherwise influenced by the circuitry of the computing device 202), which may be affected by the number, type and/or distribution of components in the computing device 202 proximal to the capacitor 206a (although in some cases components distal to the capacitor 206a may affect the local electric field observed by the capacitor 206a).
For example, the capacitance of the capacitor 206a may be affected by the number, type and/or distribution of components in the computing device 202. Another capacitor 206b of the capacitor array 206 may be affected by the type and distribution of components in the computing device 202. In some examples, the capacitor array 206 itself may generate an electric field which may be affected by the hardware configuration of the computing device 202. In this case, any change to the hardware configuration may be detectable by causing a change to the electric field generated by the capacitor array 206 (and hence, causing a detectable change to the capacitance in at least one of the capacitors 206). In some examples, a change to a hardware configuration of the computing device 202 may be detectable (by the capacitor array 206) in the electric field generated by (e.g., operation of) the capacitor array 206. In some examples, a change to the hardware configuration may influence the electric field generated by the capacitor array 206. For example, the number, type and/or distribution of components in the computing device 202 may influence the electric field observed by the capacitor array 206 (e.g., irrespective of whether the computing device 202 is powered on or off). In some examples, if the computing device is powered on, the computing device 202 may itself generate an electric field (e.g., a parasitic electric field), which may affect the electric field observed by the capacitor array 206. A change to the capacitance at any of the capacitors of the capacitor array 206 may in some cases be indicative of the computing device 202 having been tampered with. In some cases, a change to the capacitance at a combination of the capacitors of the capacitor array 206 may be indicative of the computing device 202 having been tampered with. The change to the capacitance may be determined based on a change to a measured electric value (e.g., current, voltage) associated with the capacitor 206a, 206b that observes such a change. Thus, the measured electric value may provide the indication of the electrical parameter (i.e., capacitance in this example) measured at a specified location associated with the computing device 202.

The sensor 204 is depicted as spatially separated from the computing device 202 in FIG. 2, although the sensor 204 may or may not be spatially separate from the computing device 202. In some examples, the sensor 204 may be provided as circuitry that is distinct/separate from the circuitry of the computing device 202 itself (e.g., the capacitor array 206 may be provided as part of a different PCA to the computing device 202). For example, the sensor 204 may be positioned in an appropriate location relative to the computing device 202 in order to perform measurements. In another example, the sensor 204 may be installed as part of the same PCA as the computing device 202 but using circuitry separate from the circuitry of interest in the computing device 202 (e.g., the sensor 204 may be provided as part of a different layer of the PCA). In some examples, the sensor 204 may be installed in situ for performing the measurements (e.g., the sensor 204 may be permanently installed alongside the computing device 202 within, or combined with, the packaging of the computing device 202). In some examples, the sensor 204 may be provided as part of the circuitry of the computing device 202 itself (e.g., the capacitor array 206 may be embedded with the computing device 202 itself and use the same circuitry as the circuitry of interest of the computing device 202).
For example, the sensor 204 and the computing device 202 may share the same platform, e.g., as part of the same PCA (or a layer thereof). If the sensor 204 can be installed in situ with the computing device 202, this may provide the ability to provide measurements for a period of time, which may increase the likelihood of detecting a tamper attempt (e.g., during manufacturing or even after supply to an end user). These measurements may be performed at any time using any specified schedule to increase the likelihood of detecting any tamper attempts. A verifying entity may rely on measurements provided by the sensor 204 itself and, in some examples, may not use any other dedicated equipment for detecting a tamper attempt. In some examples, the sensor 204 may be positionable adjacent the computing device 202 (e.g., temporarily by a verifying entity) in order to take measurements while the sensor 204 is in situ. In some examples, the sensor 204 may comprise a dedicated device for performing the measurements. For example, the sensor 204 may comprise a touchscreen (e.g., a capacitive touchscreen such as may be used in a smart phone or other touch-sensitive device for receiving user input). Capacitive touchscreens may be relatively inexpensive and readily deployable to obtain measurements. The technology used in capacitive touchscreens may be combined with a PCA implementing the functionality of the computing device 202. For example, capacitors used in touchscreens may be installed at appropriate positions within the PCA (e.g., connected via dedicated circuitry lanes which are connectable to processing circuitry for obtaining measurements associated with the capacitors). In some examples, circuitry for obtaining measurements (e.g., from a capacitor array 206) may already be implemented by the computing device 202. In some examples, the touchscreen may be positioned proximal to (e.g., above or adjacent to) a computing device such as a motherboard in order to take measurements from such a computing device (e.g., to determine whether or not the computing device has been tampered with). In some examples, the touchscreen may be controlled by the computing device (e.g., the computing device may control what is displayed on the touchscreen). In some examples, the touchscreen may be used to provide user input to the computing device (e.g., a user touch may provide instructions for execution by the computing device). In some examples, a touchscreen of a user device may be used in certain scenarios to determine whether or not the computing device of the user device has been tampered with. For example, a user device such as a clamshell device (e.g., a folding phone, tablet, notebook or laptop) may comprise a touchscreen that can be positioned proximal to the computing device of the clamshell (such as when the clamshell device is closed). While in this closed position, the touchscreen may be used to take measurements which can be used to determine whether or not the computing device of the clamshell device has been tampered with. In another example, the touchscreen of, for example, a smartphone or tablet may be proximal to the computing device of the smartphone/tablet at all times (due to the design of the smartphone/tablet). In any case, the touchscreen of the user device may be sensitive to any changes in the hardware configuration of the computing device. In some examples, isolated electrical contacts (e.g., circuitry lanes) in the PCA may form a capacitor-like structure.
Measurements of electrical values associated with such electrical contacts may provide the indication. Thus, in some examples, dedicated structures of the PCA such as isolated electrical contacts may provide the indication without utilizing more expensive components such as dedicated capacitors. In some examples, a capacitive touchscreen may comprise a capacitor array 206. In use, the sensor 204 may be positioned at an appropriate location relative to the computing device 202 and measurements taken as needed. The sensitivity of a sensor 204 such as a capacitive touchscreen may be varied according to a specified need for the computing device 202. For example, if a specified sensitivity is needed for detecting variations in the electric field distribution, the sensor 204 may be selected accordingly (e.g., by selecting a certain number, type and/or distribution of capacitors 206a, 206b). If the sensor 204 is in the form of a capacitive touchscreen, the touchscreen may be constructed according to the application and/or specified sensitivity. For example, a certain layer (e.g., a protective layer) of the touchscreen may be omitted and/or the circuitry of the touchscreen could be modified to increase sensitivity.

In some examples, the capacitors 206a, 206b of the capacitor array 206 are distributed at specified positions relative to the computing device 202 (e.g., the capacitors 206a, 206b may be distributed at least partially throughout or adjacent the computing device 202). In some examples, the position of the capacitors 206a, 206b may be fixed (e.g., if the sensor 204 is installed in situ with the computing device 202 or as part of its packaging, for example). In some examples, the position of the capacitors 206a, 206b may not be fixed (e.g., if the sensor 204 is not installed as part of the computing device 202, such as may be the case if a user can position the sensor 204 as needed). In some examples, at least some of the capacitors 206a, 206b may be arranged in a regular pattern (e.g., a grid of capacitors 206a, 206b with a regular or repeated spacing therebetween). In some examples, at least some of the capacitors 206a, 206b may be arranged in an irregular pattern (e.g., a grid of capacitors 206a, 206b with a random, non-repeating or non-uniform spacing therebetween). The particular pattern of capacitors 206a, 206b may be such as to increase a signal-to-noise ratio when measuring the electrical parameter. For example, a particular component of interest of the computing device 202 may affect the electric field distribution in a relatively insignificant manner. The number, type and/or position of certain capacitors 206a, 206b relative to this particular component may be such that any change to the electric field distribution (e.g., due to tampering) may be detected even though the change to the electric field distribution may be relatively insignificant if a tamper event occurs. In some examples, a capacitor 206a may have a different property (e.g., a different capacitance) to another capacitor 206b of the capacitor array 206. The property of the capacitors 206 may be selected according to a specified sensitivity for the specified location of the capacitor 206. For example, where more sensitivity is needed for a specified location, a capacitor 206 with a different capacitance may be selected for the specified location (e.g., so that a detectable change in capacitance may be observed if a tamper event occurs).

The system 200 further comprises a measurement module 208 for measuring the electrical parameter (e.g., to measure capacitance).
The measurement module 208 is communicatively coupled to the sensor 204 to receive the indication therefrom and/or facilitate measurement of the electrical parameter. In some examples, the measurement module 208 is to measure an electrical value (e.g., voltage and/or current evolution over a specified time, a charge/discharge time, resonance frequency and/or a variation of the voltage of a polarized capacitor) which can be used to determine the capacitance of a capacitor 206a, 206b of the sensor 204. In such examples, the measurement module 208 may comprise circuitry for determining the voltage and/or current associated with a particular capacitor 206a, 206b of the sensor 204. In some examples, the measurement module 208 may obtain measurements continually (e.g., regularly or according to a specified time interval or schedule). In some examples, the measurement module 208 may obtain measurements on demand (e.g., the measurement may be initiated upon request by another entity such as a user, scheduler or management component). In some examples, a measurement may be performed by the measurement module 208 when a certain condition is met (e.g., if the computing device 202 enters a particular state (e.g., a power state) and/or if the sensor 204 is in a position suitable for measurements to be taken).

In some examples, the measurement module 208 may perform measurements with a specified sensitivity depending on certain factors. In some examples, the purpose of the capacitor 206a, 206b of interest may influence how sensitive the capacitor 206a, 206b is to be (or what its configuration is to be in relation to the computing device 202) in order to detect a change in capacitance. For example, the capacitance of the capacitor 206a, 206b and/or its position and/or sensing region size may influence how sensitive the particular capacitor 206a, 206b is to a perturbation of the electric field distribution. Thus, in some examples, if a particular hardware component does not significantly affect the electric field distribution, a capacitor may be positioned proximal to the hardware component and/or its sensitivity may be selected (e.g., its capacitance may be selected) such that any changes to the electric field distribution due to tampering of the particular component may be detected even if the perturbation to the electric field distribution is relatively small. Similarly, if a particular hardware component significantly affects the electric field distribution, a capacitor may be positioned distal to the hardware component and/or its sensitivity may be selected (e.g., reduced compared with the previous example) providing any changes to the electric field distribution due to tampering of the particular component can still be detected. In some examples, the capacitor 206a, 206b of interest may be dedicated to detecting any new components installed in the computing device 202. In some examples, the capacitor 206a, 206b of interest may be dedicated to detecting the proximity of a component being tampered with. Any combination of the above examples may be used to influence the sensitivity of the measurement. In some examples, the design of the capacitor array 206 may be selected according to a risk factor, such as whether a particular component is vulnerable to tampering, in which case the capacitor(s) selected for detecting a tamper attempt may be positioned and/or specified as appropriate according to the risk factor.
In some examples, the state of the computing device202may influence the number and/or type (e.g., sensitivity) of the measurement being taken, for example, depending on a level of risk associated with the state or a change of state. In some examples, the state of the computing device202may influence a model (e.g., as described herein) used to establish whether or not the indication is indicative of the expected electric field distribution. In some examples, the model may be adjusted and/or a different model may be used depending on the particular state (e.g., power state) of the computing device202. For example, if a particular state (or change in state) is associated with a higher risk of the computing device202being tampered with, the model may be adjusted and/or a different model may be used to reflect this higher risk and to increase the likelihood of identifying a potential attack. In some examples, a power and/or activity of the computing device202may affect the number and/or type of the measurement to be obtained (e.g., by the sensor204). For example, if the computing device202is powered off for some reason (e.g., unexpectedly or due to a reboot), the number and/or type of the measurement performed may be adjusted appropriately (e.g., to a higher than normal sensitivity in case the computing device202is more likely to have been tampered with due to its particular state). In another example, a sensitivity and/or accuracy of the measurement may be increased during power off since an electric field (e.g., parasitic electric field) generated by the hardware of the computing device202during its powered-on operation may be reduced or minimized during power off. Thus, in some examples, the sensor204may be able to detect a change to the electric field distribution associated with the computing device202during power off due to, for example, a hardware component not being in an expected position and/or not being an expected size, and the electric field distribution (e.g., generated by the sensor204) being modified by this unexpected hardware configuration. In some examples, the level of sensitivity of a measurement obtained by the sensor204may be based (e.g., dynamically) on a present and/or previous state of the computing device202. In some examples, the level of sensitivity may be informed by the model used to determine whether or not the indication is indicative of the expected electric field distribution. In some examples, the position of the capacitor206a,206bof interest relative to the computing device202may influence the specified sensitivity. For example, if the position of the capacitor206a,206bis thought to be associated with a decreased likelihood of detecting a change to the electric field distribution, the sensitivity of the measurements performed may be appropriately selected (e.g., by causing an increase to the sensitivity of the capacitor206a,206bof interest such as by adjusting the circuitry associated with the capacitor206a,206bof interest as appropriate). The system200further comprises a determining module210(e.g., comprising processing circuitry) for determining whether or not the indication is indicative of an expected electric field distribution for the computing device202. In some examples, the determining module210may implement block104of the method100. The determining module210may receive the indication from the measurement module208. 
Thus, the determining module210may use a measurement obtained by the measurement module208to determine whether or not a component of the computing device202has been tampered with. In some examples, the system200further comprises a model212(e.g., a memory accessible to the determining module210comprising information regarding an expected electrical parameter and/or electric field distribution for the computing device202), which the determining module210can use to determine whether or not the computing device202has been tampered with. In some examples, the model212comprises an electric field distribution model such as described herein. In some examples, the model212may be used to compare a measurement of an electrical parameter at a specified location relative to the computing device202with an expected (e.g., initial) measurement of the electrical parameter. For example, the expected measurement of the electrical parameter may have been obtained at a trusted time during the manufacture of the computing device202. This expected/initial measurement may be used as a baseline to compare with subsequent measurements of the electrical parameter. In some examples, the model212may be used to detect an unexpected object or component in the proximity of the computing device202. In some examples, the determining module210may be capable of detecting a difference or discrepancy between the data provided by the model212and the indication. Where such a difference or discrepancy is detected, subject to certain criteria, the determining module210may determine that the computing device202has been tampered with. However, if no such difference or discrepancy is detected, the determining module210may determine that the computing device202has not been tampered with. The determining module210may provide a notification regarding a status or trustworthiness of the computing device depending on whether or not a determination has been made that the computing device202has been tampered with. In some examples, data provided by the model212may be used for a comparison between the indication and an expected value for the electrical parameter (i.e., as provided by the data). Where the difference between the indication and the expected value for the electrical parameter is such that a specified threshold is exceeded, this may indicate that the computing device202has been tampered with. In some examples, the model212may be based on a plurality of parameters (e.g., electrical parameters) from a set of measurements. For example, multiple measurements (e.g., capacitance measurements) may be obtained from the capacitor array206. In some cases, a tampering attempt may not be directly detectable from a single measurement but instead may be detectable by analyzing the multiple measurements from the capacitor array206(e.g., multiple measurements from a single capacitor206aat different times and/or measurements from a plurality of capacitors206a,206bat the same time or at different times). For example, the model212may implement a machine learning approach to analyze the data provided by the measurement module208. For example, the machine learning-based model212may have been trained using measurement data indicative of whether or not a computing device202has been tampered with such that, in use, the determining module210may use the model212to predict whether or not a certain measurement or set of measurements is indicative of a tamper attempt. 
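A minimal sketch of the threshold comparison performed by the determining module210might look as follows; the per-capacitor comparison against a trusted baseline and the use of a threshold follow the description, while the function name, units and threshold value are illustrative assumptions.

def is_tamper_suspected(measured_pf, baseline_pf, threshold_pf=0.5):
    # Flag a potential tamper event when any capacitor's reading drifts
    # from the trusted (e.g., manufacture-time) baseline by more than
    # the specified threshold.
    return any(abs(m - b) > threshold_pf
               for m, b in zip(measured_pf, baseline_pf))

# e.g., is_tamper_suspected([10.1, 9.8, 12.4], [10.0, 9.9, 11.6]) -> True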
In some examples, the model212may be used to infer a probability that a tamper attempt has occurred. In some examples, the determining module210may detect a non-conform perturbation of an electric field surrounding a capacitor206a(or a plurality of capacitors206a,206b) of the capacitor array206. In some examples, the model212may provide or comprise information regarding an electric field that conforms to the expected electric field associated with a specified hardware configuration for the computing device202. A certain perturbation to the electric field may conform to the model (i.e., the specified hardware configuration for the computing device202may be as expected, for example, the computing device202has not been tampered with). However, if the perturbation does not conform to the model212(i.e., ‘non-conform’), this may indicate that the electric field does not conform to the expected electric field associated with the specified hardware configuration for the computing device202(i.e., the specified hardware configuration for the computing device202may not be as expected, for example, the computing device202has or may have been tampered with). An example of a non-conform perturbation may be if the capacitance of the capacitor206aor a plurality of capacitors206changes by a certain value, for example, by exceeding a threshold value. In some examples, a non-conform perturbation may be detected by analyzing multiple measurements from the capacitor206a(e.g., at different times) or the plurality of capacitors206a,206b(e.g., at the same time or at different times). The system200further comprises a response module214communicatively coupled to the determining module210. The response module214may take certain action depending on whether or not the determining module210determines that a tamper attempt has or may have taken place. Various possible actions may be taken by the response module214, as will be explained in more detail below. In some examples, the response module214may send a notification to a user, management infrastructure or other verifying entity with any information about the attack/potential attack on the computing device202. This notification may comprise information about the attack such as the obtained measurement(s) indicative of the attack, the model212and/or the state of the computing device202(e.g., before, during or after the attack). In some examples, the response module214may cause a restriction to the functionality of the computing device202. In some examples, the response module214may cause the computing device202to be locked, powered off or otherwise disabled (or restricted in function) until a management command is received by the response module214to permit increased (or normal) functionality of the computing device202. In some examples, the response module214may cause the computing device202to be locked, powered off or otherwise disabled (or restricted in function) until the determining module210determines that the measurement(s) provided by the measurement module208are indicative of an expected value for the measurement(s). For example, the determining module210may determine that an obtained measurement is indicative of compliance with the model212(e.g., the difference between the measurement and an expected value may be below a threshold). 
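The actions of the response module214described above can be pictured as a simple policy function; the callback names, the returned states and the ordering are illustrative assumptions rather than required behavior.

def respond_to_determination(tamper_suspected, notify, restrict_device):
    # notify and restrict_device are caller-supplied actions, e.g. one that
    # alerts a verifying entity and one that locks, powers off or otherwise
    # restricts the device until a management command clears it.
    if tamper_suspected:
        notify("possible tamper event detected")
        restrict_device()
        return "restricted"
    return "normal"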
In some examples, the response module214may implement any combination of the above examples to reduce or restrict the functionality of the computing device202in accordance with a particular protocol to reduce or manage the risk associated with a (potentially) compromised computing device202. Although depicted as separate modules inFIG.2, any of: the measurement module208, determining module210, model212and response module214may be combined with any other of: the measurement module208, determining module210, model212and response module214. Thus, processing circuitry for implementing the method100may comprise processing circuitry associated with any combination of: the measurement module208, determining module210, model212and response module214. In some examples, any of the modules described above (e.g., the measurement module208, determining module210, model212and response module214) comprises at least one dedicated processor (e.g., an application specific integrated circuit (ASIC) and/or field programmable gate array (FPGA), etc) for implementing the functionality of the module. In some examples, the module comprises at least one processor for implementing instructions which cause the at least one processor to implement the functionality of the module described above. In such examples, the instructions may be stored in a machine-readable medium (not shown) accessible to the at least one processor. In some examples, the module itself comprises the machine-readable medium. In some examples, the machine-readable medium may be separate to the module itself (e.g., the at least one processor of the module may be provided in communication with the machine readable medium to access the instructions stored therein). FIG.3depicts a flowchart of a method300, which may be a computer-implemented method, of determining certain information about a computing device. The method300may be implemented by an entity such as an end user, manufacturer or other verifying entity. The method300may be implemented using processing circuitry of, for example, a personal computer, dedicated processing apparatus, server, cloud-based service or any other computing device. The method300may be implemented in conjunction with or as part of the method100. Further reference is made to features ofFIG.2. In this example, the method300further comprises the blocks102,104of the method100. In some examples, determining whether or not the indication is indicative of the expected electric field distribution associated with the specified hardware configuration for the computing device (i.e., block104of method300) comprises comparing the indication of the electrical parameter with a previously-obtained indication of the electrical parameter. The method300may further comprise determining whether or not any deviation between the indication and the previously-obtained indication is indicative of an unexpected modification of the hardware configuration for the computing device. The previously-obtained indication may be accessible (e.g., via the model212ofFIG.2) to the determining module210. Thus, the determining module210may receive the indication (e.g., when testing whether or not the computing device202has an expected provenance) and compare this received indication with the previously-obtained indication (e.g., which may have been obtained at a trusted point in the manufacturing process and stored in memory or used to generate the model212). 
In some examples, determining whether or not the indication is indicative of the expected electric field distribution associated with the specified hardware configuration for the computing device (i.e., block104of method300) comprises determining whether or not the indication meets a specified criterion with respect to the expected electric field distribution. Where the specified criterion is met, a provenance of the computing device may meet an expectation. Where the specified criterion is not met, the provenance of the computing device may not meet the expectation. In some examples, the specified criterion may comprise a threshold which, if exceeded, may be indicative of a potential tampering attempt (e.g., due to a change in the electric field distribution). For example, the provenance of the computing device202may meet an expectation (i.e., the computing device has not been tampered with) when the specified criterion is that the difference between the indication and the previously-obtained indication is below a threshold and/or if the analysis of the indication provides information that a change to the electric field distribution is not indicative of a tamper attempt (e.g., based on data provided by the model212). The provenance may not meet the expectation when the specified criterion is that the difference between the indication and the previously-obtained indication meets or exceeds the threshold and/or if the analysis of the indication provides information that a change to the electric field distribution is indicative of a tamper attempt (e.g., based on data provided by the model212). In some examples, the method300comprises, at block302, causing a sensor (e.g., sensor204) to perform a measurement of the electrical parameter. For example, the measurement module208may be caused to obtain the measurement under various circumstances as described below. In some examples, the sensor may be caused to perform the measurement at least one of: repeatedly; upon request by a verifying entity; and in response to a specified condition being met. In some examples, the specified condition may refer to a computing device state being changed e.g., due to a power off event or reboot. In some examples, the specified condition may refer to a determination being made that the computing device has been moved to a different location and/or a different entity now possesses the computing device. In some examples, causing the sensor to perform the measurement with a specified sensitivity may be based on at least one of a purpose of the sensor, a state of the computing device and a location of the sensor relative to the computing device. In some examples, the sensitivity of the measurement may depend on any of the certain factors described above. In some examples, receiving the indication (e.g., block102of the method300) comprises receiving an indication of an electrical parameter measured at a plurality of locations associated with at least the part of the computing device. For example, the electrical parameter may be measured at multiple locations relative to the computing device. Where, in some examples, the sensor comprises a capacitor array, each capacitor may be associated with a different location and may provide the measurement at its particular location. 
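One plausible reading of such a multi-location determination is a per-location statistical baseline. The sketch below builds a baseline from repeated measurements taken at a trusted time and checks a new reading against it; the three-sigma rule, the small variance floor and all names are assumptions made for illustration.

import statistics

def build_location_model(trusted_runs):
    # trusted_runs: at least two readings, each a list with one value per
    # measurement location, captured at a trusted time.
    per_location = zip(*trusted_runs)
    return [(statistics.mean(vals), statistics.stdev(vals))
            for vals in per_location]

def conforms_to_model(model, reading, z_max=3.0):
    # The reading conforms when every location stays within z_max standard
    # deviations of its baseline mean; the floor avoids dividing by a zero
    # standard deviation for perfectly stable locations.
    return all(abs(value - mean) <= z_max * max(stdev, 1e-12)
               for value, (mean, stdev) in zip(reading, model))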
In some examples, the measurement of the electrical parameter at each location may be performed by moving the sensor to different locations in order to take a measurement at the location (e.g., if the sensor comprises a single capacitor or other electrical component capable of detecting a variation in the electric field). In some examples, determining whether or not the indication is indicative of the expected electric field distribution associated with the specified hardware configuration for the computing device may comprise using a model (e.g., an ‘electric field distribution model’) to determine whether or not the indication of the electrical parameter measured at the plurality of locations is indicative of the expected electric field distribution. In some examples, the model is based on a previously-obtained indication of the electrical parameter measured at the plurality of locations (e.g., ‘experimental data’). For example, the model may be constructed using data from previous measurements of the electrical parameter at the plurality of locations. In some examples, the model is based on a previously-obtained indication of the electrical parameter measured at the specified location. In some examples, the model is based on information (e.g., ‘theoretical knowledge’) regarding how a hardware component of the computing device influences the expected electric field distribution. For example, the information may relate to how a hardware component may perturb or otherwise influence an electric field surrounding the hardware component. In another example, the information may relate to the expected electric field generated by the hardware component during operation (e.g., when powered up). In some examples, any way in which the hardware component affects the expected electric field distribution may be modeled (e.g., using a previously-obtained indication and/or the information). Thus, the model may be based on at least one of: experimental data and theoretical knowledge regarding how a hardware component contributes to or otherwise affects an electric field. In some examples, the method300comprises, at block304, causing a notification to be issued in response to a determination being made that the indication is inconsistent with the expected electric field distribution. For example, the indication may be inconsistent with the expected electric field distribution if the specified criterion is not met and/or if the difference between the indication and a previously-obtained indication meets or exceeds a threshold. In some examples, causing the notification to be issued may cause a certain action to be taken. For example, any one or combination of the following example actions may be taken. An example action may be that the notification is to be sent to a verifying entity. Another example action may be to restrict functionality of the computing device until the verifying entity permits an increase in functionality of the computing device. Another example action may be to restrict functionality of the computing device until a determination is made that the indication is indicative of the computing device functioning as expected. FIG.4is a schematic illustration of an example apparatus400for implementing or at least partially facilitating certain methods described herein (e.g., method100,300). In some examples, the apparatus400comprises a computing device and/or may be regarded as an example of a computing device such as computing device202. 
In some examples, the apparatus400may be used to determine certain information regarding a computing device that is distinct/separate from the apparatus400itself. The apparatus400comprises a sensor402to measure a characteristic of an electric field distribution (e.g., an electrical parameter associated with the electric field distribution) influenced by a hardware component of a computing device (e.g., of the apparatus400itself or of a distinct/separate computing device). In some examples, the sensor402may comprise a sensor (e.g., sensor204ofFIG.2) that is embedded, combined or otherwise integrated with the processing circuitry (e.g., the circuitry of the computing device) and/or packaging of the apparatus400. In some examples, the sensor402may comprise a component that is capable of measuring the electrical parameter or otherwise sending the indication of the electrical parameter to another module (e.g., the measurement module208or the determining module210ofFIG.2). In some examples, the sensor402may be sensitive to a change in the electric field distribution (e.g., due to a hardware component of the computing device). For example, the size, position and/or type of the hardware component may affect the electric field distribution. A change to the hardware component (e.g., due to tampering of the hardware component) may influence the electric field distribution, which may be detectable by the sensor402. In use of the apparatus400, the sensor402sends an indication of the characteristic to a verifying entity. Sending the indication to the verifying entity may allow the verifying entity to determine, compared with a previously-determined electric field distribution associated with the computing device, whether or not a change in the electric field distribution has occurred. Thus, if a change has occurred, the verifying entity may determine whether or not the computing device has been tampered with. In some examples, the sensor402may send the indication to or via the measurement module208and/or the determining module210such as inFIG.2. In some examples, the apparatus400may comprise the measurement module208such as referred to inFIG.2, and this measurement module208may receive the indication of the characteristic from the sensor402and send the indication of the characteristic (e.g., an indication of the measurement of the electrical parameter) to (e.g., the determining module210of) the verifying entity. FIG.5is a schematic illustration of an example apparatus500for implementing or at least partially facilitating certain methods described herein (e.g., method100,300). The apparatus500comprises processing circuitry502, which may comprise a computing device such as computing device202. The apparatus500may comprise or communicate with certain modules or apparatus such as described in relation toFIG.2. In this example, the apparatus500comprises the sensor402ofFIG.4. The sensor402is not part of the processing circuitry502in this example but in other examples may form part of the processing circuitry502. In some examples, the sensor402comprises a capacitor array (such as described in relation toFIG.2) for measuring the characteristic of the electric field distribution at different locations of the apparatus500. In some examples, the capacitor array comprises a plurality of capacitors distributed across the apparatus500. The indication of the characteristic may be indicative of a capacitance value associated with the capacitor at each of the different locations. 
For example, each capacitor may have a certain associated capacitance value which is dependent on the electric field distribution associated with the computing device (and/or due to the electric field generated by the capacitor itself). The indication of the characteristic (i.e., the capacitance value for each of the plurality of capacitors) may be used to determine (e.g., with reference to a model) whether or not the computing device has been changed (e.g., tampered with). In some examples, certain capacitors of the capacitor array may be distributed uniformly across the apparatus500(for example, at least part of the apparatus500may comprise a plurality of uniformly distributed capacitors distributed according to a regular pattern such as described above). In some examples, certain capacitors of the capacitor array may be distributed non-uniformly across the apparatus500(for example, at least part of the apparatus500may comprise a plurality of non-uniformly distributed capacitors distributed according to an irregular pattern such as described above). In some examples, the apparatus500further comprises a conductive element504. The conductive element504may modify the electric field distribution to create a signature electric field distribution linked to processing circuitry502(e.g., a computing device) of the apparatus500. For example, the conductive element504may be formed in such a way as to modify the electric field distribution in a manner that is related to the identity of the apparatus500. Thus, each apparatus500comprising a computing device may be manufactured to have its own signature electric field distribution so that each apparatus500is distinguishable from any other apparatus500even though each apparatus500comprises the same components. In some examples, the conductive element504may be included with the apparatus500in such a way as to facilitate measurements of the electric field. For example, the conductive element504may enhance or otherwise modify the electric field distribution in a readily detectable manner. In some examples, an entity such as an unauthorized manufacturer may struggle to replicate the same electric field distribution with a compromised or unauthorized component/apparatus500. In some examples, the signature electric field distribution may be determined at a trusted point in the manufacturing of the apparatus500or at another appropriate time. This signature electric field distribution may serve as a trusted reference point which may, for example, be communicated to a repository (e.g., the model212) which can be accessed for future measurements to determine whether or not the apparatus500has been tampered with or cannot be trusted for some reason. In some examples, the conductive element504comprises a plurality of conductive particles. The plurality of conductive particles may be introduced into the PCA and/or the packaging (e.g., shell) of the apparatus500(e.g., by a trusted entity during manufacture of the apparatus500). In some examples, the conductive element504may be included in the apparatus500in a non-deterministic process to create a physically unclonable function for the signature electric field distribution. The signature electric field distribution may be difficult to replicate if the conductive element504is included using the non-deterministic process. 
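A highly simplified sketch of enrolling and later checking such a signature is given below. Practical physically unclonable functions use error-correcting (fuzzy-extractor) schemes rather than plain quantization and hashing, so the quantization step, the hash choice and the names are all assumptions for illustration only.

import hashlib

def enroll_signature(readings_pf, quantum_pf=0.5):
    # Quantize each reading so ordinary measurement noise lands in the same
    # bin, then hash the bins into a compact reference signature that can be
    # stored in a trusted repository.
    bins = tuple(round(r / quantum_pf) for r in readings_pf)
    return hashlib.sha256(repr(bins).encode()).hexdigest()

def matches_enrolled(readings_pf, enrolled_digest, quantum_pf=0.5):
    return enroll_signature(readings_pf, quantum_pf) == enrolled_digest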
Where the conductive element comprises a plurality of conductive particles, these particles may be included in the apparatus500in a non-deterministic fashion (e.g., they may be randomly incorporated into or otherwise combined with the apparatus500). FIG.6is a schematic illustration of an example apparatus600for implementing or at least partially facilitating certain methods described herein (e.g., method100,300). The apparatus600comprises processing circuitry602. The processing circuitry602may comprise certain modules or apparatus (e.g., the measurement module208) such as described in relation toFIG.2. In this example, the apparatus600is a separate/distinct apparatus to the apparatus400ofFIG.4. The apparatus600may be used to control operation of and/or receive measurements from the apparatus400. In some examples, the apparatus600may be combined with or otherwise integrated with the apparatus400. The processing circuitry602comprises a processing module604for determining whether or not a measurement is to be obtained from a sensor (e.g., the sensor204ofFIG.2) for measuring the electric field associated with a specified hardware configuration for the computing device (e.g., computing device202). Upon a determination being made that a measurement is to be obtained, the processing module604may cause the sensor204to obtain the measurement. This measurement (or an indication thereof) provided/sent by the sensor204may be received by the apparatus600itself or another apparatus (e.g., comprising the determining module210ofFIG.2) in order to allow a determination to be made regarding whether or not the electric field distribution associated with a specified hardware configuration for the computing device is as expected. In some examples, the processing module604comprises at least one dedicated processor (e.g., an application specific integrated circuit (ASIC) and/or field programmable gate array (FPGA), etc) for implementing the functionality of the processing module604described above. In some examples, the processing module604comprises at least one processor for implementing instructions which cause the at least one processor to implement the functionality of the processing module604described above. In such examples, the instructions may be stored in a machine-readable medium (not shown) accessible to the at least one processor. In some examples, the apparatus600comprises the machine-readable medium. In some examples, the machine-readable medium may be separate to the apparatus600(e.g., the at least one processor of the processing module604may be provided in communication with the machine readable medium to access the instructions stored therein). In some examples, the processing module604comprises the measurement module208ofFIG.2. In some examples, the processing circuitry602further comprises another module such as any one or combination of: the determining module210, the model212and the response module214ofFIG.2. FIG.7schematically illustrates a machine-readable medium700(e.g., a tangible machine-readable medium) which stores instructions702, which when executed by at least one processor704, cause the at least one processor704to carry out certain example methods described herein (e.g., the method100or300). The instructions702comprise instructions706to cause the at least one processor704to acquire an indication of an electrical parameter associated with a component (e.g., a ‘hardware component’) for detecting a hardware status of a computing device. 
In some examples, the instructions706may cause the at least one processor704to provide the same or similar functionality of block102of the method100. In some examples, the hardware status may refer to a hardware configuration of the computing device. In some examples, the hardware status may refer to a number, size, position and/or type of the component used in the computing device. In some examples, the indication may be indicative of an expected electric field distribution associated with a specified hardware configuration for the computing device. The instructions702further comprise instructions708to cause the at least one processor704to determine whether or not the indication meets a specified condition with respect to an estimated electrical parameter for the component. The estimated electrical parameter may be based on an electric field distribution model (e.g., the model212) for the computing device and the specified condition may be indicative of the hardware status of the computing device being as expected. The hardware status may be as expected if the component (or indeed any other part of the computing device) has not been tampered with or is otherwise considered to be not compromised or unauthorized. In some examples, the instructions708may cause the at least one processor704to provide the same or similar functionality to block104of the method100. The instructions702further comprise instructions710to cause the at least one processor704to cause a notification to be issued if the indication does not meet the specified condition. In some examples, the instructions710may cause the at least one processor704to provide the same or similar functionality to the response module214ofFIG.2. Certain modules of apparatus described herein may be combined with certain modules of other apparatus described herein. For example, any of the modules or apparatus described in relation to any one ofFIGS.2,4,5and6may be combined with or replace any of the modules or apparatus described in relation to any other ofFIGS.2,4,5and6. Further, certain modules or apparatus described herein may at least partially provide the same or similar functionality as certain methods described herein (e.g., methods100,300), and vice versa. Examples in the present disclosure can be provided as methods, systems or as a combination of machine readable instructions and processing circuitry. Such machine readable instructions may be included on a non-transitory machine (for example, computer) readable storage medium (including but not limited to disc storage, CD-ROM, optical storage, etc.) having computer readable program codes therein or thereon. The present disclosure is described with reference to flow charts and block diagrams of the method, devices and systems according to examples of the present disclosure. Although the flow charts described above show a specific order of execution, the order of execution may differ from that which is depicted. Blocks described in relation to one flow chart may be combined with those of another flow chart. It shall be understood that each block in the flow charts and/or block diagrams, as well as combinations of the blocks in the flow charts and/or block diagrams can be realized by machine readable instructions. The machine readable instructions may, for example, be executed by a general purpose computer, a special purpose computer, an embedded processor or processors of other programmable data processing devices to realize the functions described in the description and diagrams. 
In particular, a processor or processing circuitry, or a module thereof, may execute the machine readable instructions. Thus functional modules or apparatus of the system200, apparatus400,500,600(for example, the measurement module208, determining module210, model212, response module214, communication module406and/or processing module604) and devices may be implemented by a processor executing machine readable instructions stored in a memory, or a processor operating in accordance with instructions embedded in logic circuitry. The term ‘processor’ is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc. The methods and functional modules may all be performed by a single processor or divided amongst several processors. Such machine readable instructions may also be stored in a computer readable storage that can guide the computer or other programmable data processing devices to operate in a specific mode. Such machine readable instructions may also be loaded onto a computer or other programmable data processing devices, so that the computer or other programmable data processing devices perform a series of operations to produce computer-implemented processing, thus the instructions executed on the computer or other programmable devices realize functions specified by block(s) in the flow charts and/or in the block diagrams. Further, the teachings herein may be implemented in the form of a computer program product, the computer program product being stored in a storage medium and comprising a plurality of instructions for making a computer device implement the methods recited in the examples of the present disclosure. While the method, apparatus and related aspects have been described with reference to certain examples, various modifications, changes, omissions, and substitutions can be made without departing from the scope of the present disclosure. It is intended, therefore, that the method, apparatus and related aspects be limited by the scope of the following claims and their equivalents. It should be noted that the above-mentioned examples illustrate rather than limit what is described herein, and that many implementations may be designed without departing from the scope of the appended claims. Features described in relation to one example may be combined with features of another example. The word “comprising” does not exclude the presence of elements other than those listed in a claim, “a” or “an” does not exclude a plurality, and a single processor or other unit may fulfil the functions of several units recited in the claims. The features of any dependent claim may be combined with the features of any of the independent claims or other dependent claims.
11860208
DETAILED DESCRIPTION OF THE INVENTION FIG.1shows a schematic depiction of an electrical equivalent circuit diagram for a device according to the invention for testing the function of an antenna system. The antenna system is used for foreign metal detection, which reacts to the detection of a metallic object, for example a coin, a screw, a nail and the like, by outputting a warning and/or deactivating a technical system connected to the device. The device is used to check the antenna system and the signal processing components connected to the antenna system for defects and proper functionality. In particular, the device described below is intended for use in an inductive vehicle charging system that involves energy being transmitted by means of the transformer principle over distances of between a few centimeters and approx. 20 cm. Such an energy transmission system involves a large magnetic field being created between an external floor coil and an on-vehicle underbody coil, depending on distance, design and power. When the floor coil is active, a metallic body located in the effective area of the floor coil can be heated. The temperatures arising in the metallic body can become so high that the housing enclosing the external floor coil, which is typically made of a plastic, can be damaged. In addition, there is the risk that the hot metallic body can ignite combustible substances in the vicinity. There is also the risk of burns for living beings that come into contact with the already heated metallic object. The device described below allows a test of the function of the antenna system and of the control and/or signal processing components connected downstream of the antenna system with regard to short circuits, open lines and the like. In the variant shown inFIG.1, only the detection of internal errors in the selection unit used therein is not possible with certainty, in particular the incorrect selection of a channel owing to an internal error (a so-called internal "stuck at" error) that is not visible on the control lines (select lines) of the selection unit. The antenna system to be tested comprises a plurality of antenna units, in the example AE0, . . . , AE15. Each of the antenna units AE0, . . . , AE15comprises an antenna A0, . . . , A15, a first resistor R0, . . . , R15(so-called series resistor) interconnected in series with the antenna A0, . . . , A15, and a second resistor RP0, . . . , RP15, which is merely optional and is interconnected in parallel with the series circuit comprising the first resistor R0, . . . , R15and the antenna A0, . . . , A15. Each of the antenna units AE0, . . . , AE15is interconnected in each particular case between a node K biased with a bias voltage Vofst and an input, assigned to the respective antenna unit AE0, . . . , AE15, of a selection unit SE. Depending on the voltage supply (not shown), the bias voltage can have a positive or a negative value (in the case of a unipolar voltage supply) or can be at a ground potential. The interconnection is such that a respective node comprising antenna A0, . . . , A15and parallel resistor RP0, . . . , RP15is interconnected with the node K and a node comprising the series resistor R0, . . . , R15and the parallel resistor RP0, . . . , RP15of a respective antenna unit is interconnected with a respectively uniquely assigned input SEI0, . . . , SEI15of the selection unit SE. In the exemplary embodiment shown inFIG.1, the selection unit SE consists of a cascade of multiplexers MUX, MUXa, . . . , MUXd. 
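Anticipating the control signals CTRL_MUX_I and CTRL_MUX_II described below, the cascade structure already fixes how an antenna index maps onto the two selections. The short Python sketch that follows assumes the grouping given in the description (MUXa carries SEI0 to SEI3, MUXb carries SEI4 to SEI7, and so on); the function itself is illustrative rather than part of the disclosure.

def mux_control_values(antenna_index):
    # 16 antenna inputs feed four 4:1 multiplexers (MUXa..MUXd), whose
    # outputs feed one 4:1 multiplexer (MUX) at the output stage.
    if not 0 <= antenna_index <= 15:
        raise ValueError("antenna index must be 0..15")
    ctrl_mux_i = antenna_index // 4   # selects MUXa..MUXd at the output stage
    ctrl_mux_ii = antenna_index % 4   # selects the input within that multiplexer
    return ctrl_mux_i, ctrl_mux_ii

# Example: antenna A6 sits on input SEI6 (channel 2 of MUXb) -> (1, 2).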
The selection unit has two cascade stages MUX I and MUX II, merely by way of illustration. The cascade stage MUX II, which represents the input of the selection unit SE, has four multiplexers MUXa, . . . , MUXd in the present exemplary embodiment, by way of illustration. The cascade stage MUX I, which represents the output of the selection unit SE, has the multiplexer MUX. The outputs of the multiplexers MUXa, . . . , MUXd of the second cascade stage MUX II are accordingly connected to the inputs of the multiplexer MUX of the first cascade stage MUX I. An output of the multiplexer MUX represents an output SEO of the selection unit SE. It goes without saying that the selection unit SE can be formed from a different number of cascade stages (one, three or more). The number of multiplexers from the second cascade stage MUX II onward can also be selected differently than here. As will also become clear from the description that follows, each multiplexer MUXa, . . . , MUXd, MUX has four inputs and one output. It goes without saying that this is also merely illustrative. The output SEO of the selection unit SE is connected to an input CUI of a computing unit CU via a signal processing unit SPU, which comprises, for example, a filter and an amplifier and the like. The computing unit CU is designed to provide a control signal for the selection unit SE at a first output CUO1, wherein the control signal defines which input SEI0, . . . , SEI15of the selection unit SE is to be connected to the output SEO of the selection unit SE. As a result, the computing unit can determine which antenna unit AE0, . . . , AE15is connected to the computing unit for evaluating an antenna signal. The selection unit SE having multiple cascade stages means that two control signals CTRL_MUX_I and CTRL_MUX_II are required in the present case for controlling the multiplexer MUX of the first cascade stage MUX I and the multiplexers MUXa, . . . , MUXd of the second cascade stage MUX II. In practice, this means that the computing unit CU comprises four first outputs or output terminals for this purpose, which in the present case are combined under the first output CUO1. The computing unit CU is further designed to receive at its input CUI1the antenna signal present at the output SEO of the selection unit SE and processed by the signal processing unit SPU. The device further comprises a diagnostic circuit DC. The diagnostic circuit DC comprises a series circuit comprising a controllable switching element S1and a diagnostic resistor DR. The series circuit comprising the controllable switching element S1and the diagnostic resistor DR is interconnected between a diagnostic voltage connection K1and the output SEO of the selection unit SE. The controllable switching element S1is controlled using a control signal CTRL_DIAG, which is output at a second output CUO2by the computing unit CU. The control signal CTRL_DIAG that is output at the second output CUO2of the computing unit CU can be used by the computing unit CU to determine whether the controllable switching element S1is switched on or off. In the description that follows, when a controllable switching element S1has been switched on, the diagnostic circuit DC is referred to as active or activated, and when a controllable switching element S1has been switched off, the diagnostic circuit DC is referred to as inactive or deactivated. The diagnostic resistor DR together with the series resistor R0, . . . , R15of the antenna unit AE0, . . . 
, AE15currently selected by the computing unit CU form a voltage divider, with the potential that is present at the output SEO of the selection unit SE assuming a different voltage level depending on the activation or deactivation of the diagnostic circuit DC. From the comparison of the antenna signals at the output SEO of the selection unit SE that are determined when the diagnostic circuit DC is activated and not activated, the computing unit CU can infer a fault in the antenna system and the location of the occurrence of the fault. Since the resistance values of the diagnostic resistor DR and the resistance values Ra, Rb of the series resistors R0, . . . , R15of the antenna units AE0, . . . , AE15are known, a voltage value that can be expected, both with the diagnostic circuit DC activated and with it deactivated, is obtained at the output SEO of the selection unit SE for each of the antenna units AE0, . . . , AE15if they are operating as intended. If a fault occurs, be it due to an open connection of the antenna, a short circuit to ground or a short circuit between two antennas, a voltage value that deviates from the expected value is obtained at the output SEO of the selection unit SE for the antenna unit AE0, . . . , AE15under consideration, on the other hand. This can be evaluated by the computing unit CU and, depending on the evaluation result, operation as intended or a fault and the location thereof can be inferred. The basic principle of diagnosis is as follows: A diagnosis for the antennas A0, . . . , A15or for the connection between a respective antenna A0, . . . , A15and the computing unit CU can be determined from knowledge of the series resistance of magnitude Ra or Rb, which is known and defined for each antenna A0, . . . , A15, and from the known magnitude of the diagnostic resistor DR. A specific nominal voltage is obtained for each of the antennas A0, . . . , A15. Short circuits to ground, open plug connections and short circuits between two antennas A0, . . . , A15can be determined in this way. If the voltage measured at the output SEO is Vsns=Vref (which is obtained from the known magnitudes of the series resistor Ra, Rb and the magnitude of the diagnostic resistor DR), then there is no fault. A short circuit to ground results in Vsns being very much lower than Vref. The following applies in the case of an open line: Vsns>Vref. In the event of a short circuit to the adjacent antenna: Vsns<Vref. By comparing Vsns with and without the diagnostic circuit DC activated, the measurement path of the signal processing unit SPU is also automatically checked. By evaluating the voltage value at the output SEO of the selection unit, the following faults can be inferred depending on the level of the voltage value: There is no fault if the voltage value is Va or Vb when the diagnostic circuit DC is activated. With an open line, a voltage value Vc is obtained. A short circuit to ground results in a voltage value Vd. In the event of a short circuit to an adjacent antenna, a voltage value Ve, Vf or Vg is obtained depending on whether the adjacent antenna has the same series resistance value Ra or Rb or a different series resistance value Ra or Rb. The voltage values Va, Vb, Vc, Vd, Ve, Vf and Vg are different voltage values that are obtained from the known magnitudes of the series resistor Ra, Rb and the magnitude of the diagnostic resistor DR and the fault that is currently occurring. The diagnosis for the control of the multiplexers MUX, MUXa, . . . 
, MUXd is thus carried out, among other things, by using at least two series resistors Ra, Rb of different magnitude per multiplexer MUXa, . . . , MUXd. A suitable choice as to which of the antennas A0, . . . , A15are provided with which series resistance value Ra or Rb allows all multiplexer controls to be checked for correct operation by means of the control signals CTRL_MUX_I, CTRL_MUX_II. A prerequisite for this is that each multiplexer MUXa, . . . , MUXd of the second cascade stage MUX II has a so-called "marker bit" MB (seeFIGS.4and5), i.e. each input of the multiplexer MUXa or MUXb or MUXc or MUXd currently under consideration must deliver a different result than the other inputs thereof. This allows control errors to be clearly identified. In addition, each multiplexer MUXa, . . . , MUXd of the second cascade stage MUX II must deliver a clear result pattern. This procedure is explained below with reference to the selection unit SE depicted in enlarged form inFIG.2and the result matrices shown inFIGS.3to5. The diagnosis of the type described here can, if the antenna system is installed in an inductive charging system, be carried out before the start of the charging process or during the inductive charging process. In the latter case, it is expedient to briefly interrupt charging and carry out the diagnosis as described herein. Alternatively, the antennas A0, . . . , A15can also be designed in such a way that the signal from the transmitting antenna is not completely compensated for to zero, as is the case with conventional metal detectors. In this way, a certain minimal signal can be measured by the computing unit during normal operation. If this signal disappears for one or more antennas, a fault can immediately be inferred. FIG.2shows an enlarged depiction of the selection unit SE fromFIG.1to explain the control signals used in the matrices ofFIGS.3to5. The control signals CTRL_MUX_I and CTRL_MUX_II used by the computing unit CU to control the multiplexers MUX, MUXa, . . . , MUXd can be seen, the bit values for the control signal CTRL_MUX_I being indicated by yy and bit values for the control signal CTRL_MUX_II being indicated by xx. Furthermore, the inputs SEI0, . . . , SEI15of the multiplexers MUXa, . . . , MUXd can be seen. It is readily apparent that the multiplexer MUXa comprises the inputs SEI0, . . . , SEI3, the multiplexer MUXb comprises the inputs SEI4, . . . , SEI7, the multiplexer MUXc comprises the inputs SEI8, . . . , SEI11and the multiplexer MUXd comprises the inputs SEI12, . . . , SEI15. FIG.3shows a matrix depicting the result values for the possible control situations of the selection unit SE. The possible signal values for yy for the control signal CTRL_MUX_I are indicated in columns and those for the values xx for the control signal CTRL_MUX_II are indicated in rows in the matrix. The matrix values Va and Vb denote the voltage values Vsns at the output SEO of the selection unit SE. The voltage values Va and Vb are obtained depending on the resistance values Ra, Rb of the series resistors R0, . . . , R15that are assigned to the respective input SEI0, . . . , SEI15. From this matrix it can be seen that the value of the series resistor R3at the input SEI3of the multiplexer MUXa has the value Rb, while the values of the series resistors of the other inputs SEI0, SEI1and SEI2of the multiplexer MUXa have the value Ra. 
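Before the remaining marker-bit positions are described, the voltage comparison set out above can be condensed into a short sketch. The divider formula assumes one particular topology (the diagnostic source driving the output through DR while the selected antenna path pulls toward Vofst through its series resistor, with the antenna treated as a DC short); that assumption, the tolerance and the ground fraction are illustrative and not values from the description.

def expected_vref(v_diag, v_ofst, r_series, r_diag):
    # Node voltage of two resistors tied to two sources: each source is
    # weighted by the opposite resistor (superposition).
    return (v_ofst * r_diag + v_diag * r_series) / (r_series + r_diag)

def classify_fault(v_sns, v_ref, tol=0.05, ground_fraction=0.1):
    # Mirrors the comparisons in the text, assuming a positive reference:
    # Vsns close to Vref -> no fault, Vsns > Vref -> open line,
    # Vsns far below Vref -> short to ground, otherwise a short circuit
    # to an adjacent antenna.
    if abs(v_sns - v_ref) <= tol * abs(v_ref):
        return "no fault"
    if v_sns > v_ref:
        return "open line"
    if v_sns < ground_fraction * abs(v_ref):
        return "short circuit to ground"
    return "short circuit to adjacent antenna"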
Similarly, the value of the series resistor R6connected to the input SEI6of the multiplexer MUXb is Rb, while the values of the other series resistors of the multiplexer MUXb are Ra. For the multiplexers MUXc and MUXd, the value of the series resistor R9and the value of the series resistor R12, which are connected to the inputs SEI9of the multiplexer MUXc and SEI12of the multiplexer MUXd, are Rb, while all the other values of the series resistors are Ra. The positions at which the voltage value Vb is obtained on the basis of a different voltage value Vsns at the output SEO of the selection unit SE can be referred to as marker bit MB. The marker bits are denoted by an ellipse in the matrix shown inFIG.3. When a fault occurs and the diagnostic circuit is activated, deviating voltage values Vsns (namely Vc, Vd, Ve, Vf or Vg, see above) are obtained at the output SEO of the selection unit SE. A comparison of the result values of the matrix for a functional antenna system and an antenna system that has a fault allows a fault to be inferred. FIG.4shows a further result matrix, which allows faults to be narrowed down when a control error occurs in the selection unit SE (first four columns inFIG.4, so-called "stuck at" error) and when a short circuit occurs in the control lines for controlling the selection unit SE (last four columns, so-called "control lines shorted" error).FIG.4shows the results table when there is a control error in the line for the multiplexer MUXb of the second cascade stage MUX II. By contrast,FIG.5shows an extended error matrix when there is an error in the control of the selection unit SE in relation to the high or low bit for the multiplexer MUX of the first cascade stage MUX I. FIG.6shows a schematic depiction of an electrical equivalent circuit diagram according to the invention, which is an extension of the device shown and described inFIG.1, and enables a detection of internal errors in the multiplexers, in particular the incorrect selection of a channel owing to an internal error that is not visible on the control lines CTRL_MUX_I, CTRL_MUX_II of the selection unit SE. The device shown inFIG.6has a further selection unit SE2, comprising a demultiplexer MUXf. An input SE2I of the demultiplexer MUXf is biased with the bias voltage Vofst. The number of outputs SE2O0, . . . , SE2O3corresponds to the number of multiplexers of the second cascade stage MUX II. In other words, the number of outputs SE2O0, . . . , SE2O3is four. Each of the outputs SE2O0, . . . , SE2O3is coupled in each particular case via the assigned resistor RP0, . . . , RP15to exactly one input SEI0, . . . , SEI15of the multiplexers of the second cascade stage MUX II, the relevant input being referred to as an antenna-group-specific input. The antenna-group-specific input is coupled to that input of each of the multiplexers MUXa, . . . , MUXd of the second cascade stage MUX II of the selection unit SE that has the identifier that is present in the form of an identical binary number as the control signal CTRL_MUX_II. As a result, when the demultiplexer MUXf connects the input SE2I to the output SE2O0(channel 0) on the basis of the control signal CTRL_MUX_II, the inputs SEI0(channel 0) of the multiplexer MUXa, SEI4(channel 0) of the multiplexer MUXb, SEI8(channel 0) of the multiplexer MUXc and SEI12(channel 0) of the multiplexer MUXd are connected to the bias voltage Vofst. The other inputs controlled by channels 1, 2 and 3, i.e. 
SEI1, SEI2, SEI3of the multiplexer MUXa, SEI5, SEI6, SEI7of the multiplexer MUXb, SEI9, SEI10, SEI11of the multiplexer MUXc and SEI13, SEI14, SEI15of the multiplexer MUXd, are floating, on the other hand. If the control signal CTRL_MUX_II controls channel 1 of the demultiplexer MUXf and the multiplexers MUXa, . . . , MUXd, the input SE2I is connected to the output SE2O1, as a result of which the inputs SEI1of the multiplexer MUXa, SEI5of the multiplexer MUXb, SEI9of the multiplexer MUXc and SEI13of the multiplexer MUXd are connected to the bias voltage Vofst. The other inputs controlled by channels 0, 2 and 3, i.e. SEI0, SEI2, SEI3of the multiplexer MUXa, SEI4, SEI6, SEI7of the multiplexer MUXb, SEI8, SEI10, SEI11of the multiplexer MUXc and SEI12, SEI14, SEI15of the multiplexer MUXd, are floating, on the other hand. The same applies if the control signal CTRL_MUX_II controls channel 2 or 3 of the demultiplexer MUXf and the multiplexers MUXa, . . . , MUXd. The monitoring of "stuck at" errors in the selection units SE, SE2can be effected by virtue of a respective specific number of antenna units, which are combined into groups at their common base, being selected via the demultiplexer. By comparing the position of the marker bits described above with the expected value, the correct control of all the multiplexers can be checked if a suitable group is selected. FIG.7shows a result matrix that allows faults to be narrowed down when an internal error occurs in the selection unit SE (columns 2 to 5 "MUXa: stuck@0", "MUXa: stuck@3", "MUXb: stuck@13" and "MUXe: stuck@2") and when an internal error occurs in the further selection unit SE2(last column "MUXf: stuck@1"). The column heading "MUXa: stuck@0" means that channel 0 remains statically "selected" in the multiplexer MUXa, even if the control signal CTRL_MUX_II selects a different channel (here: 1, 2 or 3). The column labeled "NoE" shows the values of the voltage Vsns=Va or Vsns=Vb expected at the output SEO for all inputs SEI0, . . . , SEI15of the multiplexers if there is no error. An ellipse denotes deviations from the expected values in the respective error scenarios. In the event of an error in which e.g. the multiplexer MUXa statically "selects" channel 0 (column 2 of the table), the selection of channels 1, 2 or 3 results in voltage values Vsns=Vaux (where Vaux>>Va or Vb) at the output SEO for the inputs SEI1, SEI2and SEI3, because the inputs SEI1(when channel 1 is selected), SEI2(when channel 2 is selected) and SEI3(when channel 3 is selected) are floating owing to the connection to Vofst not being able to be made. The same applies to the other error cases shown inFIG.7.
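The marker-bit pattern lends itself to a compact consistency check. In the sketch below, the marker channels (Rb at SEI3 of MUXa, SEI6 of MUXb, SEI9 of MUXc and SEI12 of MUXd) are taken from the description of FIG.3, while the matrix encoding and the function names are assumptions; any cell that deviates from the expected pattern localizes a control or internal multiplexer error.

# Channel (CTRL_MUX_II value) at which each multiplexer MUXa..MUXd sees
# the series resistance Rb (voltage Vb) instead of Ra (voltage Va).
MARKER_CHANNEL = {0: 3, 1: 2, 2: 1, 3: 0}

def expected_result_matrix():
    # Rows: CTRL_MUX_II channel xx; columns: CTRL_MUX_I selection yy.
    return [["Vb" if MARKER_CHANNEL[mux] == channel else "Va"
             for mux in range(4)]
            for channel in range(4)]

def deviating_cells(measured_matrix):
    # Returns (channel, multiplexer) pairs whose measured value differs
    # from the fault-free pattern, e.g. the Vaux cells of a stuck channel.
    expected = expected_result_matrix()
    return [(channel, mux)
            for channel in range(4) for mux in range(4)
            if measured_matrix[channel][mux] != expected[channel][mux]]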
11860209
DETAILED DESCRIPTION Related US patents and patent applications include U.S. application Ser. No. 15/357,157, U.S. Pat. Nos. 9,537,586, 9,185,591, 8,977,212, 8,798,548, 8,805,291, 8,780,968, 8,824,536, 9,288,683, 9,078,162, U.S. application Ser. No. 13/913,013, and U.S. Application No. 61/789,758. All of them are incorporated herein by reference in their entirety. The present invention addresses the longstanding, unmet needs existing in the prior art and commercial sectors to provide solutions to at least four major problems existing before the present invention, each of which requires near real time results from continuous scanning of the target environment for the spectrum. The present invention relates to systems, methods, and devices of the various embodiments that enable spectrum management by identifying, classifying, and cataloging signals of interest based on radio frequency measurements. Furthermore, the present invention relates to spectrum analysis and management for radio frequency (RF) signals, to automatically identifying baseline data and changes in state for signals from a multiplicity of devices in a wireless communications spectrum, and to providing remote access to measured and analyzed data through a virtualized computing network. In an embodiment, signals and the parameters of the signals may be identified and indications of available frequencies may be presented to a user. In another embodiment, the protocols of signals may also be identified. In a further embodiment, the modulation of signals, data types carried by the signals, and estimated signal origins may be identified. It is an object of this invention to provide an apparatus for identifying signal emitting devices including: a housing, at least one processor and memory, at least one receiver, and sensors constructed and configured for sensing and measuring wireless communications signals from signal emitting devices in a spectrum associated with wireless communications; and wherein the apparatus is operable to automatically analyze the measured data to identify at least one signal emitting device in near real time from attempted detection and identification of the at least one signal emitting device, and then to identify open space available for wireless communications, based upon the information about the signal emitting device(s) operating in the predetermined spectrum; furthermore, the present invention provides baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units for making unique comparisons of data.
The present invention further provides systems for identifying white space in wireless communications spectrum by detecting and analyzing signals from any signal emitting devices including at least one apparatus, wherein the at least one apparatus is operable for network-based communication with at least one server computer including a database, and/or with at least one other apparatus, but does not require a connection to the at least one server computer to be operable for identifying signal emitting devices; wherein each apparatus is operable for identifying signal emitting devices and includes: a housing, at least one processor and memory, at least one receiver, and sensors constructed and configured for sensing and measuring wireless communications signals from signal emitting devices in a spectrum associated with wireless communications; and wherein the apparatus is operable to automatically analyze the measured data to identify at least one signal emitting device in near real time from attempted detection and identification of the at least one signal emitting device, and then to identify open space available for wireless communications, based upon the information about the signal emitting device(s) operating in the predetermined spectrum; all of the foregoing using baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units for making unique comparisons of data. The present invention is further directed to a method for identifying baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units, storing the aggregated data in a database, and providing secure, remote access to the compressed data for each unit and to the aggregated data via a network-based virtualized computing system or cloud-based system, for making unique comparisons of data in a wireless communications spectrum, including the steps of: providing a device for measuring characteristics of signals from signal emitting devices in a spectrum associated with wireless communications, with measured data characteristics including frequency, power, bandwidth, duration, modulation, and combinations thereof, the device including a housing, at least one processor and memory, and sensors constructed and configured for sensing and measuring wireless communications signals within the spectrum; and further including the following steps performed within the device housing: assessing whether the measured data includes analog and/or digital signal(s); determining a best fit based on frequency, if the measured power spectrum is designated in a historical or a reference database(s) for frequency ranges; automatically determining a category for either analog or digital signals, based on power and sideband combined with frequency allocation; determining a TDM/FDM/CDM signal, based on duration and bandwidth; identifying at least one signal emitting device from the composite results of the foregoing steps; and then automatically identifying the open space available for wireless communications, based upon the information about the signal emitting device(s) operating in the predetermined spectrum; all using baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units for making unique comparisons of data.
Additionally, the present invention provides systems, apparatus, and methods for identifying open space in a wireless communications spectrum using an apparatus having a multiplicity of processors and memory, at least one receiver, sensors, and communications transmitters and receivers, all constructed and configured within a housing for automated analysis of detected signals from signal emitting devices, determination of signal duration and other signal characteristics, and automatic generation of information relating to device identification, open space, and signal optimization, all using baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units for making unique comparisons of data within the spectrum for wireless communication, and for providing secure, remote access via a network to the data stored in a virtualized computer system. Referring now to the drawings in general, the illustrations are for the purpose of describing at least one preferred embodiment and/or examples of the invention and are not intended to limit the invention thereto. Various embodiments are described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. The present invention provides systems, methods, and devices for spectrum analysis and management by identifying, classifying, and cataloging at least one or a multiplicity of signals of interest based on radio frequency measurements, location, and other measurements, and using near real-time parallel processing of signals and their corresponding parameters and characteristics in the context of historical and static data for a given spectrum, all using baseline data and changes in state for compressed data to enable near real time analytics and results for individual units and for aggregated units for making unique comparisons of data. The systems, methods, and apparatus according to the present invention preferably have the ability to detect in near real time, more preferably to detect, sense, measure, and/or analyze in near real time, and more preferably to perform any near real time operations within about 1 second or less. Advantageously, the present invention and its real time functionality described herein uniquely provide and enable the apparatus units to compare measurements to historical data, to update data and/or information, and/or to provide more data and/or information on the open space, on the apparatus unit or device that may be occupying the open space, and combinations thereof, in near real time compared with the historically scanned (15 minutes to 30 days) data or historical database information.
Also, the data from each apparatus unit or device and/or the aggregated data from more than one apparatus unit or device are communicated via a network to at least one server computer and stored on a database in a virtualized or cloud-based computing system, and the data is available for secure, remote access via the network from distributed remote devices having software applications (apps) operable thereon, for example by web access (mobile app) or computer access (desktop app). The systems, methods, and devices of the various embodiments enable spectrum management by identifying, classifying, and cataloging signals of interest based on radio frequency measurements. In an embodiment, signals and the parameters of the signals may be identified and indications of available frequencies may be presented to a user. In another embodiment, the protocols of signals may also be identified. In a further embodiment, the modulation of signals, data types carried by the signals, and estimated signal origins may be identified. Embodiments are directed to a spectrum management device that may be configurable to obtain spectrum data over a wide range of wireless communication protocols. Embodiments may also provide the ability to acquire data from and send data to database repositories that may be used by a plurality of spectrum management customers. In one embodiment, a spectrum management device may include a signal spectrum analyzer that may be coupled with a database system and spectrum management interface. The device may be portable or may be a stationary installation and may be updated with data to allow the device to manage different spectrum information based on frequency, bandwidth, signal power, time, and location of signal propagation, as well as modulation type and format, and to provide signal identification, classification, and geo-location. A processor may enable the device to process spectrum power density data as received and to process raw I/Q complex data that may be used for further signal processing, signal identification, and data extraction. In an embodiment, a spectrum management device or apparatus unit may comprise a low noise amplifier that receives radio frequency (RF) energy from an antenna. The antenna may be any antenna structure that is capable of receiving RF energy in a spectrum of interest. The low noise amplifier may filter and amplify the RF energy. The RF energy may be provided to an RF translator. The RF translator may perform a fast Fourier transform (FFT) and either a square magnitude or a fast convolution spectral periodogram function to convert the RF measurements into a spectral representation. In an embodiment, the RF translator may also store a timestamp to facilitate calculation of a time of arrival and an angle of arrival. The In-Phase and Quadrature (I/Q) data may be provided to a spectral analysis receiver or it may be provided to a sample data store where it may be stored without being processed by a spectral analysis receiver. The input RF energy may also be directly digitally down-converted and sampled by an analog to digital converter (ADC) to generate complex I/Q data. The complex I/Q data may be equalized to remove multipath, fading, white noise, and interference from other signaling systems by fast parallel adaptive filter processes. This data may then be used to calculate modulation type and baud rate. Complex sampled I/Q data may also be used to measure the signal angle of arrival and time of arrival.
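As a non-limiting illustration of the FFT and square-magnitude step described above, the following Python sketch (assuming NumPy is available) converts complex I/Q samples into an averaged power spectrum. The sample rate, FFT size, window choice, and averaging count are assumptions made only for the example, not parameters mandated by the disclosure.

    # Averaged periodogram sketch: |FFT|^2 of windowed I/Q segments.
    import numpy as np

    FS = 10e6        # assumed sample rate in Hz
    NFFT = 1024      # assumed FFT size

    def power_spectrum(iq, n_avg=8):
        """Return (frequencies in Hz, power in dB) averaged over n_avg
        segments of NFFT complex samples each."""
        segments = iq[:n_avg * NFFT].reshape(n_avg, NFFT)
        window = np.hanning(NFFT)
        psd = np.mean(np.abs(np.fft.fft(segments * window, axis=1)) ** 2,
                      axis=0)
        freqs = np.fft.fftshift(np.fft.fftfreq(NFFT, d=1.0 / FS))
        return freqs, 10.0 * np.log10(np.fft.fftshift(psd) + 1e-12)

    # Example: a tone at +1 MHz in complex white noise.
    t = np.arange(8 * NFFT) / FS
    iq = 0.01 * np.exp(2j * np.pi * 1e6 * t) \
         + 0.001 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
    freqs, psd_db = power_spectrum(iq)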
Such information as angle of arrival and time of arrival may be used to compute more complex and precise direction finding. In addition, they may be used to apply geo-location techniques. Data may be collected from known signals or unknown signals and time spaced in order to provide expedient information. I/Q sampled data may contain raw signal data that may be used to demodulate and translate signals by streaming them to a signal analyzer or to a real-time demodulator software defined radio that may have the newly identified signal parameters for the signal of interest. The inherent nature of the input RF allows for any type of signal to be analyzed and demodulated based on the reconfiguration of the software defined radio interfaces. A spectral analysis receiver may be configured to read raw In-Phase (I) and Quadrature (Q) data and either translate directly to spectral data or down convert to an intermediate frequency (IF) up to half the Nyquist sampling rate to analyze the incoming bandwidth of a signal. The translated spectral data may include measured values of signal energy, frequency, and time. The measured values provide attributes of the signal under review that may confirm the detection of a particular signal of interest within a spectrum of interest. In an embodiment, a spectral analysis receiver may have a referenced spectrum input of 0 Hz to 12.4 GHz, preferably not lower than 9 kHz, with capability of fiber optic input for spectrum input up to 60 GHz. For each device, at least one receiver is used. In one embodiment, the spectral analysis receiver may be configured to sample the input RF data by fast analog down-conversion of the RF signal. The down-converted signal may then be digitally converted and processed by fast convolution filters to obtain a power spectrum. This process may also provide spectrum measurements including the signal power, the bandwidth, the center frequency of the signal as well as a Time of Arrival (TOA) measurement. The TOA measurement may be used to create a timestamp of the detected signal and/or to generate a time difference of arrival iterative process for direction finding and fast triangulation of signals. In an embodiment, the sample data may be provided to a spectrum analysis module. In an embodiment, the spectrum analysis module may evaluate the sample data to obtain the spectral components of the signal. In an embodiment, the spectral components of the signal may be obtained by the spectrum analysis module from the raw I/Q data as provided by an RF translator. The I/Q data analysis performed by the spectrum analysis module may operate to extract more detailed information about the signal, including by way of example, modulation type (e.g., FM, AM, QPSK, 16QAM, etc.) and/or protocol (e.g., GSM, CDMA, OFDM, LTE, etc.). In an embodiment, the spectrum analysis module may be configured by a user to obtain specific information about a signal of interest. In an alternate embodiment, the spectral components of the signal may be obtained from power spectral component data produced by the spectral analysis receiver. In an embodiment, the spectrum analysis module may provide the spectral components of the signal to a data extraction module. The data extraction module may provide the classification and categorization of signals detected in the RF spectrum. The data extraction module may also acquire additional information regarding the signal from the spectral components of the signal. 
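The down-conversion to an intermediate frequency mentioned above can likewise be sketched briefly as a complex mix followed by low-pass filtering and decimation. This minimal illustration assumes NumPy and SciPy are available; the center frequency, filter length, and decimation factor are hypothetical values chosen for the example.

    # Digital down-conversion sketch: complex mix to baseband, low-pass
    # filter, then decimate to reduce the sample rate.
    import numpy as np
    from scipy.signal import firwin, lfilter

    FS = 10e6          # assumed input sample rate in Hz
    F_CENTER = 2.5e6   # assumed center frequency of the signal of interest

    def downconvert(iq, decim=4):
        n = np.arange(iq.size)
        mixed = iq * np.exp(-2j * np.pi * F_CENTER / FS * n)  # shift to 0 Hz
        taps = firwin(129, (FS / decim) / 2.0, fs=FS)         # anti-alias LPF
        baseband = lfilter(taps, 1.0, mixed)
        return baseband[::decim], FS / decim                  # data, new rate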
For example, the data extraction module may provide modulation type, bandwidth, and possible system in use information. In another embodiment, the data extraction module may select and organize the extracted spectral components in a format selected by a user. The information from the data extraction module may be provided to a spectrum management module. The spectrum management module may generate a query to a static database to classify a signal based on its components. For example, the information stored in the static database may be used to determine the spectral density, center frequency, bandwidth, baud rate, modulation type, protocol (e.g., GSM, CDMA, OFDM, LTE, etc.), system or carrier using licensed spectrum, location of the signal source, and a timestamp of the signal of interest. These data points may be provided to a data store for export. In an embodiment and as more fully described below, the data store may be configured to access mapping software to provide the user with information on the location of the transmission source of the signal of interest. In an embodiment, the static database includes frequency information gathered from various sources including, but not limited to, the Federal Communication Commission, the International Telecommunication Union, and data from users. As an example, the static database may be an SQL database. The data store may be updated, downloaded, or merged with other devices or with its main relational database. Software API applications may be included to allow database merging with third-party spectrum databases that may only be accessed securely. In the various embodiments, the spectrum management device may be configured in different ways. In an embodiment, the front end of the system may comprise various hardware receivers that may provide In-Phase and Quadrature complex data. The front end receiver may include API set commands via which the system software may be configured to interface (i.e., communicate) with a third party receiver. In an embodiment, the front end receiver may perform the spectral computations using an FFT (Fast Fourier Transform) and other DSP (Digital Signal Processing) to generate a fast convolution periodogram that may be re-sampled and averaged to quickly compute the spectral density of the RF environment. In an embodiment, cyclic processes may be used to average and correlate signal information by extracting the changes inside the signal to better identify the signal of interest that is present in the RF space. A combination of amplitude and frequency changes may be measured and averaged over the bandwidth time to compute the modulation type and other internal changes, such as changes in frequency offsets, orthogonal frequency division modulation, changes in time (e.g., Time Division Multiplexing), and/or changes in I/Q phase rotation used to compute the baud rate and the modulation type. In an embodiment, the spectrum management device may have the ability to compute several processes in parallel by use of a multi-core processor along with several embedded field programmable gate arrays (FPGAs). Such multi-core processing may allow the system to quickly analyze several signal parameters in the RF environment at one time in order to reduce the amount of time it takes to process the signals. The number of signals computed at once may be determined by their bandwidth requirements. Thus, the capability of the system may be based on a maximum frequency Fs/2.
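Because the static database may be an SQL database, the classification query generated by the spectrum management module could be sketched as follows. The table schema, file name, and matching tolerances here are hypothetical and serve only to illustrate the lookup, using Python's built-in sqlite3 module.

    # Static-database classification sketch.
    import sqlite3

    conn = sqlite3.connect("static_db.sqlite")     # hypothetical file name
    conn.execute("""CREATE TABLE IF NOT EXISTS known_signals (
                        name TEXT, protocol TEXT,
                        center_hz REAL, bandwidth_hz REAL)""")

    def classify(center_hz, bandwidth_hz, tol_hz=1e3):
        """Return (name, protocol) of the closest catalog entry, or None."""
        return conn.execute(
            """SELECT name, protocol FROM known_signals
               WHERE ABS(center_hz - ?) <= ?
                 AND ABS(bandwidth_hz - ?) <= 0.1 * bandwidth_hz
               ORDER BY ABS(center_hz - ?) LIMIT 1""",
            (center_hz, tol_hz, bandwidth_hz, center_hz)).fetchone()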
The number of signals to be processed may be allocated based on their respective bandwidths. In another embodiment, the signal spectrum may be measured to determine its power density, center frequency, bandwidth, and the location from which the signal is emanating, and a best match may be determined based on the signal parameters and the information criteria of the frequency. In another embodiment, a GPS and direction finding (DF) location system may be incorporated into the spectrum management device and/or made available to the spectrum management device. Adding GPS and DF ability may enable the user to provide a location vector using the National Marine Electronics Association's (NMEA) standard form. In an embodiment, location functionality is incorporated into a specific type of GPS unit, such as a U.S. government issued receiver. The information may be derived from the location presented by the database internal to the device, a database imported into the device, or by the user inputting geo-location parameters of longitude and latitude, which may be expressed as degrees, minutes and seconds, decimal minutes, or decimal form and translated to the necessary format, with the default being 'decimal' form. This functionality may be incorporated into a GPS unit. The signal information and the signal classification may then be used to locate the signaling device as well as to provide a direction finding capability. A type of triangulation using three units in a group antenna configuration performs direction finding by using multilateration. Commonly used in civil and military surveillance applications, multilateration is able to accurately locate an aircraft, vehicle, or stationary emitter by measuring the "Time Difference of Arrival" (TDOA) of a signal from the emitter at three or more receiver sites. If a pulse is emitted from a platform, it will arrive at slightly different times at two spatially separated receiver sites, the TDOA being due to the different distances of each receiver from the platform. This location information may then be supplied to a mapping process that utilizes a database of mapping images that are extracted from the database based on the latitude and longitude provided by the geo-location or direction finding device. The mapping images may be scanned in to show the points of interest where a signal is expected to be emanating from, based either on the database information or on an average taken from the database information and the geo-location calculation performed prior to the mapping software being called. The user can control the map to maximize or minimize the mapping screen to get a better view, better suited to providing information on the signal transmissions. In an embodiment, the mapping process does not rely on outside mapping software. The mapping capability has the ability to generate the map image and to populate a mapping database that may include information from third party maps to meet specific user requirements. In an embodiment, triangulation and multilateration may utilize a Bayesian type filter that may predict possible movement and future location and operation of devices based on input collected from the TDOA and geolocation processes and the variables from the static database pertaining to the specified signal of interest. The Bayesian filter takes the input changes in time difference and its inverse function (i.e., frequency difference) and takes an average change in signal variation to detect and predict the movement of the signals.
The signal changes are measured within a 1 ns time difference, and the filter may also adapt its gradient error calculation to remove unwanted signals that may cause errors due to signal multipath, inter-symbol interference, and other signal noise. In an embodiment, the changes within a 1 ns time difference for each sample for each unique signal may be recorded. The spectrum management device may then perform the inverse and compute and record the frequency difference and phase difference between each sample for each unique signal. The spectrum management device may take the same signal and calculate an error based on other input signals coming in within the 1 ns time, and may average and filter out the computed error to equalize the signal. The spectrum management device may determine the time difference and frequency difference of arrival for that signal, compute the odds of where the signal is emanating from based on the frequency band parameters presented from the spectral analysis and processor computations, and determine the best position from which the signal is transmitted (i.e., the origin of the signal). FIG. 1 illustrates a wireless environment 100 suitable for use with the various embodiments. The wireless environment 100 may include various sources 104, 106, 108, 110, 112, and 114 generating various radio frequency (RF) signals 116, 118, 120, 122, 124, 126. As an example, mobile devices 104 may generate cellular RF signals 116, such as CDMA, GSM, 3G signals, etc. As another example, wireless access devices 106, such as Wi-Fi® routers, may generate RF signals 118, such as Wi-Fi® signals. As a further example, satellites 108, such as communication satellites or GPS satellites, may generate RF signals 120, such as satellite radio, television, or GPS signals. As a still further example, base stations 110, such as a cellular base station, may generate RF signals 122, such as CDMA, GSM, 3G signals, etc. As another example, radio towers 112, such as local AM or FM radio stations, may generate RF signals 124, such as AM or FM radio signals. As another example, government service providers 114, such as police units, firefighters, military units, air traffic control towers, etc., may generate RF signals 126, such as radio communications, tracking signals, etc. The various RF signals 116, 118, 120, 122, 124, 126 may be generated at different frequencies, at different power levels, in different protocols, with different modulations, and at different times. The various sources 104, 106, 108, 110, 112, and 114 may be assigned frequency bands, power limitations, or other restrictions, requirements, and/or licenses by a government spectrum control entity, such as the FCC. However, with so many different sources 104, 106, 108, 110, 112, and 114 generating so many different RF signals 116, 118, 120, 122, 124, 126, overlaps, interference, and/or other problems may occur. A spectrum management device 102 in the wireless environment 100 may measure the RF energy in the wireless environment 100 across a wide spectrum and identify the different RF signals 116, 118, 120, 122, 124, 126 which may be present in the wireless environment 100. The identification and cataloging of the different RF signals 116, 118, 120, 122, 124, 126 which may be present in the wireless environment 100 may enable the spectrum management device 102 to determine available frequencies for use in the wireless environment 100.
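The TDOA measurement underlying the multilateration described above can be illustrated with a simple cross-correlation between captures from two receiver sites. The sample rate and simulated delay below are assumptions; reaching the 1 ns regime mentioned above additionally requires sub-sample interpolation and tight time synchronization, which this sketch omits.

    # TDOA sketch: the lag of the cross-correlation peak between two
    # receiver sites gives the time difference of arrival.
    import numpy as np

    FS = 100e6                           # assumed common sample rate in Hz

    def tdoa(rx_a, rx_b):
        """Return the delay of rx_b relative to rx_a in seconds."""
        corr = np.correlate(rx_b, rx_a, mode="full")
        lag = int(np.argmax(np.abs(corr))) - (len(rx_a) - 1)
        return lag / FS

    # Example: the same pulse arrives 25 samples (250 ns) later at site B.
    pulse = np.random.randn(4096)
    rx_b = np.concatenate([np.zeros(25), pulse])[:4096]
    print(tdoa(pulse, rx_b))             # -> 2.5e-07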
In addition, the spectrum management device 102 may be able to determine whether there are available frequencies for use in the wireless environment 100 under certain conditions (e.g., day of week, time of day, power level, frequency band, etc.). In this manner, the RF spectrum in the wireless environment 100 may be managed. FIG. 2A is a block diagram of a spectrum management device 202 according to an embodiment. The spectrum management device 202 may include an antenna structure 204 configured to receive RF energy expressed in a wireless environment. The antenna structure 204 may be any type of antenna, and may be configured to optimize the receipt of RF energy across a wide frequency spectrum. The antenna structure 204 may be connected to one or more optional amplifiers and/or filters 208, which may boost, smooth, and/or filter the RF energy received by the antenna structure 204 before the RF energy is passed to an RF receiver 210 connected to the antenna structure 204. In an embodiment, the RF receiver 210 may be configured to measure the RF energy received from the antenna structure 204 and/or the optional amplifiers and/or filters 208. In an embodiment, the RF receiver 210 may be configured to measure RF energy in the time domain and may convert the RF energy measurements to the frequency domain. In an embodiment, the RF receiver 210 may be configured to generate spectral representation data of the received RF energy. The RF receiver 210 may be any type of RF receiver, and may be configured to generate RF energy measurements over a range of frequencies, such as 0 kHz to 24 GHz, 9 kHz to 6 GHz, etc. In an embodiment, the frequency range scanned by the RF receiver 210 may be user selectable. In an embodiment, the RF receiver 210 may be connected to a signal processor 214 and may be configured to output RF energy measurements to the signal processor 214. As an example, the RF receiver 210 may output raw In-Phase (I) and Quadrature (Q) data to the signal processor 214. As another example, the RF receiver 210 may apply signal processing techniques to output complex In-Phase (I) and Quadrature (Q) data to the signal processor 214. In an embodiment, the spectrum management device may also include an antenna 206 connected to a location receiver 212, such as a GPS receiver, which may be connected to the signal processor 214. The location receiver 212 may provide location inputs to the signal processor 214. The signal processor 214 may include a signal detection module 216, a comparison module 222, a timing module 224, and a location module 225. Additionally, the signal processor 214 may include an optional memory module 226, which may include one or more optional buffers 228 for storing data generated by the other modules of the signal processor 214. In an embodiment, the signal detection module 216 may operate to identify signals based on the RF energy measurements received from the RF receiver 210. The signal detection module 216 may include a Fast Fourier Transform (FFT) module 217, which may convert the received RF energy measurements into spectral representation data. The signal detection module 216 may include an analysis module 221, which may analyze the spectral representation data to identify one or more signals above a power threshold. A power module 220 of the signal detection module 216 may control the power threshold at which signals may be identified. In an embodiment, the power threshold may be a default power setting or may be a user selectable power setting.
A noise module 219 of the signal detection module 216 may control a signal threshold, such as a noise threshold, at or above which signals may be identified. The signal detection module 216 may include a parameter module 218, which may determine one or more signal parameters for any identified signals, such as center frequency, bandwidth, power, number of detected signals, frequency peak, peak power, average power, signal duration, etc. In an embodiment, the signal processor 214 may include a timing module 224, which may record time information and provide the time information to the signal detection module 216. Additionally, the signal processor 214 may include a location module 225, which may receive location inputs from the location receiver 212 and determine a location of the spectrum management device 202. The location of the spectrum management device 202 may be provided to the signal detection module 216. In an embodiment, the signal processor 214 may be connected to one or more memory 230. The memory 230 may include multiple databases, such as a history or historical database 232 and a characteristics listing 236, and one or more buffers 240 storing data generated by the signal processor 214. While illustrated as connected to the signal processor 214, the memory 230 may also be on-chip memory residing on the signal processor 214 itself. In an embodiment, the history or historical database 232 may include measured signal data 234 for signals that have been previously identified by the spectrum management device 202. The measured signal data 234 may include the raw RF energy measurements, time stamps, location information, one or more signal parameters for any identified signals, such as center frequency, bandwidth, power, number of detected signals, frequency peak, peak power, average power, signal duration, etc., and identifying information determined from the characteristics listing 236. In an embodiment, the history or historical database 232 may be updated as signals are identified by the spectrum management device 202. In an embodiment, the characteristic listing 236 may be a database of static signal data 238. The static signal data 238 may include data gathered from various sources including, by way of example and not by way of limitation, the Federal Communication Commission, the International Telecommunication Union, telecom providers, manufacturer data, and data from spectrum management device users. Static signal data 238 may include known signal parameters of transmitting devices, such as center frequency, bandwidth, power, number of detected signals, frequency peak, peak power, average power, signal duration, geographic information for transmitting devices, and any other data that may be useful in identifying a signal. In an embodiment, the static signal data 238 and the characteristic listing 236 may correlate signal parameters and signal identifications. As an example, the static signal data 238 and characteristic listing 236 may list the parameters of the local fire and emergency communication channel correlated with a signal identification indicating that the signal is the local fire and emergency communication channel. In an embodiment, the signal processor 214 may include a comparison module 222, which may match data generated by the signal detection module 216 with data in the history or historical database 232 and/or the characteristic listing 236.
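A minimal sketch of the threshold detection and parameter determination performed by the signal detection module 216 and parameter module 218 might look as follows; the threshold value and the exact parameter set returned are illustrative assumptions rather than the disclosed implementation.

    # Threshold detection sketch: group contiguous spectral bins above a
    # power threshold into signals and report basic parameters.
    import numpy as np

    def detect_signals(freqs, psd_db, threshold_db=-80.0):
        """Return (center_hz, bandwidth_hz, peak_db) for each signal."""
        above = psd_db > threshold_db
        signals, start = [], None
        for i, hit in enumerate(above):
            if hit and start is None:
                start = i                                     # signal begins
            elif not hit and start is not None:
                seg = slice(start, i)
                signals.append((float(np.mean(freqs[seg])),   # center freq
                                float(freqs[i - 1] - freqs[start]),  # width
                                float(np.max(psd_db[seg]))))  # peak power
                start = None
        if start is not None:                  # signal touches band edge
            seg = slice(start, len(above))
            signals.append((float(np.mean(freqs[seg])),
                            float(freqs[-1] - freqs[start]),
                            float(np.max(psd_db[seg]))))
        return signals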
In an embodiment, the comparison module 222 may receive signal parameters from the signal detection module 216, such as center frequency, bandwidth, power, number of detected signals, frequency peak, peak power, average power, and signal duration, and/or receive parameters from the timing module 224 and/or the location module 225. The parameter match module 223 may retrieve data from the history or historical database 232 and/or the characteristic listing 236 and compare the retrieved data to any received parameters to identify matches. Based on the matches, the comparison module may identify the signal. In an embodiment, the signal processor 214 may be optionally connected to a display 242, an input device 244, and/or a network transceiver 246. The display 242 may be controlled by the signal processor 214 to output spectral representations of received signals, signal characteristic information, and/or indications of signal identifications on the display 242. In an embodiment, the input device 244 may be any input device, such as a keyboard and/or knob, mouse, virtual keyboard, or even voice recognition, enabling the user of the spectrum management device 202 to input information for use by the signal processor 214. In an embodiment, the network transceiver 246 may enable the spectrum management device 202 to exchange data with wired and/or wireless networks, such as to update the characteristic listing 236 and/or upload information from the history or historical database 232. FIG. 2B is a schematic logic flow block diagram illustrating logical operations which may be performed by a spectrum management device 202 according to an embodiment. A receiver 210 may output RF energy measurements, such as I and Q data, to an FFT module 252, which may generate a spectral representation of the RF energy measurements which may be output on a display 242. The I and Q data may also be buffered in a buffer 256 and sent to a signal detection module 216. The signal detection module 216 may receive location inputs from a location receiver 212 and use the received I and Q data to detect signals. Data from the signal detection module 216 may be buffered in a buffer 262 and written into a history or historical database 232. Additionally, data from the historical database may be used to aid in the detection of signals by the signal detection module 216. The signal parameters of the detected signals may be determined by a signal parameters module 218 using information from the history or historical database 232 and/or a static database 238 listing signal characteristics through a buffer 268. Data from the signal parameters module 218 may be stored in the history or historical database 232 and/or sent to the signal detection module 216 and/or the display 242. In this manner, signals may be detected and indications of the signal identification may be displayed to a user of the spectrum management device. FIG. 3 illustrates a process flow of an embodiment method 300 for identifying a signal. In an embodiment, the operations of method 300 may be performed by the processor 214 of a spectrum management device 202. In block 302 the processor 214 may determine the location of the spectrum management device 202. In an embodiment, the processor 214 may determine the location of the spectrum management device 202 based on a location input, such as GPS coordinates, received from a location receiver, such as a GPS receiver 212. In block 304 the processor 214 may determine the time. As an example, the time may be the current clock time as determined by the processor 214 and may be a time associated with receiving RF measurements.
In block 306 the processor 214 may receive RF energy measurements. In an embodiment, the processor 214 may receive RF energy measurements from an RF receiver 210. In block 308 the processor 214 may convert the RF energy measurements to spectral representation data. As an example, the processor may apply a Fast Fourier Transform (FFT) to the RF energy measurements to convert them to spectral representation data. In optional block 310 the processor 214 may display the spectral representation data on a display 242 of the spectrum management device 202, such as in a graph illustrating amplitudes across a frequency spectrum. In block 312 the processor 214 may identify one or more signals above a threshold. In an embodiment, the processor 214 may analyze the spectral representation data to identify a signal above a power threshold. A power threshold may be an amplitude measure selected to distinguish RF energies associated with actual signals from noise. In an embodiment, the power threshold may be a default value. In another embodiment, the power threshold may be a user selectable value. In block 314 the processor 214 may determine signal parameters of any identified signal or signals of interest. As examples, the processor 214 may determine signal parameters such as center frequency, bandwidth, power, number of detected signals, frequency peak, peak power, average power, and signal duration for the identified signals. In block 316 the processor 214 may store the signal parameters of each identified signal, a location indication, and a time indication for each identified signal in a history database 232. In an embodiment, a history database 232 may be a database resident in a memory 230 of the spectrum management device 202 which may include data associated with signals actually identified by the spectrum management device. In block 318 the processor 214 may compare the signal parameters of each identified signal to signal parameters in a signal characteristic listing. In an embodiment, the signal characteristic listing may be a static database 238 stored in the memory 230 of the spectrum management device 202 which may correlate signal parameters and signal identifications. In determination block 320 the processor 214 may determine whether the signal parameters of the identified signal or signals match signal parameters in the characteristic listing 236. In an embodiment, a match may be determined based on the signal parameters being within a specified tolerance of one another. As an example, a center frequency match may be determined when the center frequencies are within plus or minus 1 kHz of each other. In this manner, differences between real world measured conditions of an identified signal and ideal conditions listed in a characteristics listing may be accounted for in identifying matches. If the signal parameters do not match (i.e., determination block 320=“No”), in block 326 the processor 214 may display an indication that the signal is unidentified on a display 242 of the spectrum management device 202. In this manner, the user of the spectrum management device may be notified that a signal is detected, but has not been positively identified. If the signal parameters do match (i.e., determination block 320=“Yes”), in block 324 the processor 214 may display an indication of the signal identification on the display 242. In an embodiment, the signal identification displayed may be the signal identification correlated to the signal parameter in the signal characteristic listing which matched the signal parameter for the identified signal.
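The tolerance-based match of determination block 320 can be sketched directly from this description. Only the plus or minus 1 kHz center-frequency tolerance comes from the example above; the other tolerances and field names are assumptions made for illustration.

    # Characteristic-listing match sketch with per-parameter tolerances.
    TOLERANCES = {"center_hz": 1e3,       # from the +/- 1 kHz example
                  "bandwidth_hz": 5e3,    # assumed
                  "peak_db": 3.0}         # assumed

    def matches(measured, listed):
        return all(abs(measured[key] - listed[key]) <= tol
                   for key, tol in TOLERANCES.items())

    listing = [{"id": "local FM station", "center_hz": 101.1e6,
                "bandwidth_hz": 200e3, "peak_db": -40.0}]
    signal = {"center_hz": 101.1006e6, "bandwidth_hz": 198e3,
              "peak_db": -41.5}
    hits = [entry["id"] for entry in listing if matches(signal, entry)]
    print(hits if hits else "unidentified")   # -> ['local FM station']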
Upon displaying the indications in blocks 324 or 326, the processor 214 may return to block 302 and cyclically measure and identify further signals of interest. FIG. 4 illustrates an embodiment method 400 for measuring sample blocks of a radio frequency scan. In an embodiment, the operations of method 400 may be performed by the processor 214 of a spectrum management device 202. As discussed above, in blocks 306 and 308 the processor 214 may receive RF energy measurements and convert the RF energy measurements to spectral representation data. In block 402 the processor 214 may determine a frequency range at which to sample the RF spectrum for signals of interest. In an embodiment, a frequency range may be a frequency range of each sample block to be analyzed for potential signals. As an example, the frequency range may be 240 kHz. In an embodiment, the frequency range may be a default value. In another embodiment, the frequency range may be a user selectable value. In block 404 the processor 214 may determine a number (N) of sample blocks to measure. In an embodiment, each sample block may be sized to the determined or default frequency range, and the number of sample blocks may be determined by dividing the spectrum of the measured RF energy by the frequency range. In block 406 the processor 214 may assign each sample block a respective frequency range. As an example, if the determined frequency range is 240 kHz, the first sample block may be assigned a frequency range from 0 kHz to 240 kHz, the second sample block may be assigned a frequency range from 240 kHz to 480 kHz, etc. In block 408 the processor 214 may set the lowest frequency range sample block as the current sample block. In block 409 the processor 214 may measure the amplitude across the set frequency range for the current sample block. As an example, at each frequency interval (such as 1 Hz) within the frequency range of the sample block, the processor 214 may measure the received signal amplitude. In block 410 the processor 214 may store the amplitude measurements and corresponding frequencies for the current sample block. In determination block 414 the processor 214 may determine whether all sample blocks have been measured. If all sample blocks have not been measured (i.e., determination block 414=“No”), in block 416 the processor 214 may set the next highest frequency range sample block as the current sample block. As discussed above, in blocks 409, 410, and 414 the processor 214 may measure and store amplitudes and determine whether all blocks are sampled. If all blocks have been sampled (i.e., determination block 414=“Yes”), the processor 214 may return to block 306 and cyclically measure further sample blocks. FIGS. 5A, 5B, and 5C illustrate the process flow for an embodiment method 500 for determining signal parameters. In an embodiment, the operations of method 500 may be performed by the processor 214 of a spectrum management device 202. Referring to FIG. 5A, in block 502 the processor 214 may receive a noise floor average setting. In an embodiment, the noise floor average setting may be an average noise level for the environment in which the spectrum management device 202 is operating. In an embodiment, the noise floor average setting may be a default setting and/or may be a user selectable setting. In block 504 the processor 214 may receive the signal power threshold setting. In an embodiment, the signal power threshold setting may be an amplitude measure selected to distinguish RF energies associated with actual signals from noise.
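The sample-block bookkeeping of method 400 described above reduces to partitioning the measured span into N equal frequency ranges, as in this short sketch. The 240 kHz block size follows the example above, while the 6 GHz span is an assumption.

    # Sample-block partitioning sketch for method 400.
    def make_sample_blocks(span_hz, block_hz=240e3):
        n_blocks = int(span_hz // block_hz)        # number N of blocks
        return [(i * block_hz, (i + 1) * block_hz) for i in range(n_blocks)]

    blocks = make_sample_blocks(6e9)               # assumed 6 GHz span
    print(len(blocks), blocks[0], blocks[1])
    # -> 25000 (0.0, 240000.0) (240000.0, 480000.0)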
In an embodiment, the signal power threshold may be a default value and/or may be a user selectable setting. In block 506 the processor 214 may load the next available sample block. In an embodiment, the sample blocks may be assembled according to the operations of method 400 described above with reference to FIG. 4. In an embodiment, the next available sample block may be the oldest-in-time sample block which has not been analyzed to determine whether signals of interest are present in the sample block. In block 508 the processor 214 may average the amplitude measurements in the sample block. In determination block 510 the processor 214 may determine whether the average for the sample block is greater than or equal to the noise floor average set in block 502. In this manner, sample blocks including potential signals may be quickly distinguished from sample blocks which may not include potential signals, reducing processing time by enabling sample blocks without potential signals to be identified and ignored. If the average for the sample block is lower than the noise floor average (i.e., determination block 510=“No”), no signals of interest may be present in the current sample block. In determination block 514 the processor 214 may determine whether a cross block flag is set. If the cross block flag is not set (i.e., determination block 514=“No”), in block 506 the processor 214 may load the next available sample block and in block 508 average the sample block. If the average of the sample block is equal to or greater than the noise floor average (i.e., determination block 510=“Yes”), the sample block may potentially include a signal of interest and in block 512 the processor 214 may reset a measurement counter (C) to 1. The measurement counter value indicates which sample within a sample block is under analysis. In determination block 516 the processor 214 may determine whether the RF measurement of the next frequency sample (C) is greater than the signal power threshold. In this manner, the value of the measurement counter (C) may be used to control which sample RF measurement in the sample block is compared to the signal power threshold. As an example, when the counter (C) equals 1, the first RF measurement may be checked against the signal power threshold, and when the counter (C) equals 2, the second RF measurement in the sample block may be checked, etc. If the C RF measurement is less than or equal to the signal power threshold (i.e., determination block 516=“No”), in determination block 517 the processor 214 may determine whether the cross block flag is set. If the cross block flag is not set (i.e., determination block 517=“No”), in determination block 522 the processor 214 may determine whether the end of the sample block is reached. If the end of the sample block is reached (i.e., determination block 522=“Yes”), in block 506 the processor 214 may load the next available sample block and proceed in blocks 508, 510, 514, and 512 as discussed above. If the end of the sample block is not reached (i.e., determination block 522=“No”), in block 524 the processor 214 may increment the measurement counter (C) so that the next sample in the sample block is analyzed. If the C RF measurement is greater than the signal power threshold (i.e., determination block 516=“Yes”), in block 518 the processor 214 may check the status of the cross block flag to determine whether the cross block flag is set. If the cross block flag is not set (i.e., determination block 518=“No”), in block 520 the processor 214 may set a sample start.
As an example, the processor 214 may set a sample start by indicating in a memory that a potential signal of interest may be discovered and by assigning a memory location for RF measurements associated with the sample start. Referring to FIG. 5B, in block 526 the processor 214 may store the C RF measurement in a memory location for the sample currently under analysis. In block 528 the processor 214 may increment the measurement counter (C) value. In determination block 530 the processor 214 may determine whether the C RF measurement (e.g., the next RF measurement because the value of the RF measurement counter was incremented) is greater than the signal power threshold. If the C RF measurement is greater than the signal power threshold (i.e., determination block 530=“Yes”), in determination block 532 the processor 214 may determine whether the end of the sample block is reached. If the end of the sample block is not reached (i.e., determination block 532=“No”), there may be further RF measurements available in the sample block and in block 526 the processor 214 may store the C RF measurement in the memory location for the sample. In block 528 the processor may increment the measurement counter (C), in determination block 530 determine whether the C RF measurement is above the signal power threshold, and in block 532 determine whether the end of the sample block is reached. In this manner, successive sample RF measurements may be checked against the signal power threshold and stored until the end of the sample block is reached and/or until a sample RF measurement falls below the signal power threshold. If the end of the sample block is reached (i.e., determination block 532=“Yes”), in block 534 the processor 214 may set the cross block flag. In an embodiment, the cross block flag may be a flag in a memory available to the processor 214 indicating that the potential signal spans two or more sample blocks. In a further embodiment, prior to setting the cross block flag in block 534, the slope of a line drawn between the last two RF measurement samples may be used to determine whether the next sample block likely contains further potential signal samples. A negative slope may indicate that the signal of interest is fading and may indicate the last sample was the final sample of the signal of interest. In another embodiment, the slope may not be computed and the next sample block may be analyzed regardless of the slope. If the end of the sample block is reached (i.e., determination block 532=“Yes”) and in block 534 the cross block flag is set, referring to FIG. 5A, in block 506 the processor 214 may load the next available sample block, in block 508 may average the sample block, and in block 510 determine whether the average of the sample block is greater than or equal to the noise floor average. If the average is equal to or greater than the noise floor average (i.e., determination block 510=“Yes”), in block 512 the processor 214 may reset the measurement counter (C) to 1. In determination block 516 the processor 214 may determine whether the C RF measurement for the current sample block is greater than the signal power threshold. If the C RF measurement is greater than the signal power threshold (i.e., determination block 516=“Yes”), in determination block 518 the processor 214 may determine whether the cross block flag is set.
If the cross block flag is set (i.e., determination block 518=“Yes”), referring to FIG. 5B, in block 526 the processor 214 may store the C RF measurement in the memory location for the sample and in block 528 the processor may increment the measurement counter (C). As discussed above, in blocks 530 and 532 the processor 214 may perform operations to determine whether the C RF measurement is greater than the signal power threshold and whether the end of the sample block is reached, until the C RF measurement is less than or equal to the signal power threshold (i.e., determination block 530=“No”) or the end of the sample block is reached (i.e., determination block 532=“Yes”). If the end of the sample block is reached (i.e., determination block 532=“Yes”), as discussed above, in block 534 the cross block flag may be set (or verified and remain set if already set) and in block 535 the C RF measurement may be stored in the sample. If the end of the sample block is reached (i.e., determination block 532=“Yes”) and in block 534 the cross block flag is set, referring to FIG. 5A, the processor may perform the operations of blocks 506, 508, 510, 512, 516, and 518 as discussed above. If the average of the sample block is less than the noise floor average (i.e., determination block 510=“No”) and the cross block flag is set (i.e., determination block 514=“Yes”), the C RF measurement is less than or equal to the signal power threshold (i.e., determination block 516=“No”) and the cross block flag is set (i.e., determination block 517=“Yes”), or the C RF measurement is less than or equal to the signal power threshold (i.e., determination block 516=“No”), referring to FIG. 5B, in block 538 the processor 214 may set the sample stop. As an example, the processor 214 may indicate in a memory that a sample end is reached and/or that a sample is complete. In block 540 the processor 214 may compute and store complex I and Q data for the stored measurements in the sample. In block 542 the processor 214 may determine a mean of the complex I and Q data. Referring to FIG. 5C, in determination block 544 the processor 214 may determine whether the mean of the complex I and Q data is greater than a signal threshold. If the mean of the complex I and Q data is less than or equal to the signal threshold (i.e., determination block 544=“No”), in block 550 the processor 214 may indicate the sample is noise and discard data associated with the sample from memory. If the mean is greater than the signal threshold (i.e., determination block 544=“Yes”), in block 546 the processor 214 may identify the sample as a signal of interest. In an embodiment, the processor 214 may identify the sample as a signal of interest by assigning a signal identifier to the signal, such as a signal number or sample number. In block 548 the processor 214 may determine and store signal parameters for the signal. As an example, the processor 214 may determine and store a frequency peak of the identified signal, a peak power of the identified signal, an average power of the identified signal, a signal bandwidth of the identified signal, and/or a signal duration of the identified signal. In block 552 the processor 214 may clear the cross block flag (or verify that the cross block flag is unset). In block 556 the processor 214 may determine whether the end of the sample block is reached.
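Stripped of the figure cross-references, the scan of method 500 is a small state machine: skip blocks whose average sits below the noise floor, accumulate runs of above-threshold measurements, and let a run that is still open at a block boundary play the role of the cross block flag. The threshold values below are assumed placeholders, and the sketch simplifies the flow by closing a carried-over run as soon as a measurement falls below threshold.

    # Method-500 scan sketch: noise-floor gating, threshold runs, and
    # cross-block continuation. All dB values are assumed placeholders.
    NOISE_FLOOR_AVG = -95.0    # assumed noise floor average setting (dB)
    POWER_THRESHOLD = -85.0    # assumed signal power threshold (dB)

    def scan_blocks(blocks):
        """blocks: iterable of lists of RF power measurements in dB.
        Returns a list of samples (runs of above-threshold values)."""
        samples, run = [], []
        for block in blocks:
            below_floor = sum(block) / len(block) < NOISE_FLOOR_AVG
            if below_floor and not run:
                continue                   # no potential signal, skip block
            for value in block:
                if value > POWER_THRESHOLD:
                    run.append(value)      # sample start / continuation
                elif run:
                    samples.append(run)    # sample stop
                    run = []
            # a non-empty run here corresponds to the cross block flag
        if run:
            samples.append(run)
        return samples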
If the end of the sample block is not reached (i.e., determination block 556=“No”), in block 558 the processor 214 may increment the measurement counter (C), and referring to FIG. 5A, in determination block 516 may determine whether the C RF measurement is greater than the signal power threshold. Referring to FIG. 5C, if the end of the sample block is reached (i.e., determination block 556=“Yes”), referring to FIG. 5A, in block 506 the processor 214 may load the next available sample block. FIG. 6 illustrates a process flow for an embodiment method 600 for displaying signal identifications. In an embodiment, the operations of method 600 may be performed by a processor 214 of a spectrum management device 202. In determination block 602 the processor 214 may determine whether a signal is identified. If a signal is not identified (i.e., determination block 602=“No”), in block 604 the processor 214 may wait for the next scan. If a signal is identified (i.e., determination block 602=“Yes”), in block 606 the processor 214 may compare the signal parameters of an identified signal to signal parameters in a history database 232. In determination block 608 the processor 214 may determine whether signal parameters of the identified signal match signal parameters in the history database 232. If there is no match (i.e., determination block 608=“No”), in block 610 the processor 214 may store the signal parameters as a new signal in the history database 232. If there is a match (i.e., determination block 608=“Yes”), in block 612 the processor 214 may update the matching signal parameters as needed in the history database 232. In block 614 the processor 214 may compare the signal parameters of the identified signal to signal parameters in a signal characteristic listing 236. In an embodiment, the characteristic listing 236 may be a static database separate from the history database 232, and the characteristic listing 236 may correlate signal parameters with signal identifications. In determination block 616 the processor 214 may determine whether the signal parameters of the identified signal match any signal parameters in the signal characteristic listing 236. In an embodiment, the match in determination block 616 may be a match based on a tolerance between the signal parameters of the identified signal and the parameters in the characteristic listing 236. If there is a match (i.e., determination block 616=“Yes”), in block 618 the processor 214 may indicate a match in the history database 232 and in block 622 may display an indication of the signal identification on a display 242. As an example, the indication of the signal identification may be a display of the radio call sign of an identified FM radio station signal. If there is not a match (i.e., determination block 616=“No”), in block 620 the processor 214 may display an indication that the signal is an unidentified signal. In this manner, the user may be notified that a signal is present in the environment, but that the signal does not match a signal in the characteristic listing. FIG. 7 illustrates a process flow of an embodiment method 700 for displaying one or more open frequencies. In an embodiment, the operations of method 700 may be performed by the processor 214 of a spectrum management device 202. In block 702 the processor 214 may determine a current location of the spectrum management device 202. In an embodiment, the processor 214 may determine the current location of the spectrum management device 202 based on location inputs received from a location receiver 212, such as GPS coordinates received from a GPS receiver 212.
FIG.7illustrates a process flow of an embodiment method700for displaying one or more open frequencies. In an embodiment, the operations of method700may be performed by the processor214of a spectrum management device202. In block702the processor214may determine a current location of the spectrum management device202. In an embodiment, the processor214may determine the current location of the spectrum management device202based on location inputs received from a location receiver212, such as GPS coordinates received from a GPS receiver212. In block704the processor214may compare the current location to the stored location value in the historical database232. As discussed above, the historical or history database232may be a database storing information about signals previously actually identified by the spectrum management device202. In determination block706the processor214may determine whether there are any matches between the location information in the historical database232and the current location. If there are no matches (i.e., determination block706=“No”), in block710the processor214may indicate incomplete data is available. In other words, the spectrum data for the current location has not previously been recorded. If there are matches (i.e., determination block706=“Yes”), in optional block708the processor214may display a plot of one or more of the signals matching the current location. As an example, the processor214may compute the average power over frequency intervals across a given spectrum and may display a plot of the average power over each interval. In block712the processor214may determine one or more open frequencies at the current location. As an example, the processor214may determine one or more open frequencies by determining frequency ranges in which no signals fall or at which the average power is below a threshold. In block714the processor214may display an indication of one or more open frequencies on a display242of the spectrum management device202.
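An illustrative sketch of the open-frequency determination of blocks712and714, assuming per-interval average power values as input (the units, threshold, and data layout are hypothetical):

def find_open_frequencies(freqs_hz, avg_power_dbm, power_threshold_dbm):
    # Collect contiguous frequency ranges whose average power stays below
    # the threshold; these ranges are reported as open frequencies.
    open_ranges, start = [], None
    for f, p in zip(freqs_hz, avg_power_dbm):
        if p < power_threshold_dbm:
            if start is None:
                start = f
        elif start is not None:
            open_ranges.append((start, f))
            start = None
    if start is not None:
        open_ranges.append((start, freqs_hz[-1]))
    return open_ranges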
FIG.8Ais a block diagram of a spectrum management device802according to an embodiment. Spectrum management device802is similar to spectrum management device202described above with reference toFIG.2A, except that spectrum management device802may include symbol module816and protocol module806enabling the spectrum management device802to identify the protocol and symbol information associated with an identified signal as well as protocol match module814to match protocol information. Additionally, the characteristic listing236of spectrum management device802may include protocol data804, hardware data808, environment data810, and noise data812and an optimization module818may enable the signal processor214to provide signal optimization parameters. The protocol module806may identify the communication protocol (e.g., LTE, CDMA, etc.) associated with a signal of interest. In an embodiment, the protocol module806may use data retrieved from the characteristic listing, such as protocol data804to help identify the communication protocol. The symbol detector module816may determine symbol timing information, such as a symbol rate for a signal of interest. The protocol module806and/or symbol module816may provide data to the comparison module222. The comparison module222may include a protocol match module814which may attempt to match protocol information for a signal of interest to protocol data804in the characteristic listing to identify a signal of interest. Additionally, the protocol module806and/or symbol module816may store data in the memory module226and/or history database232. In an embodiment, the protocol module806and/or symbol module816may use protocol data804and/or other data from the characteristic listing236to help identify protocols and/or symbol information in signals of interest. The optimization module818may gather information from the characteristic listing, such as noise figure parameters, antenna hardware parameters, and environmental parameters correlated with an identified signal of interest to calculate a degradation value for the identified signal of interest. The optimization module818may further control the display242to output degradation data enabling a user of the spectrum management device802to optimize a signal of interest. FIG.8Bis a schematic logic flow block diagram illustrating logical operations which may be performed by a spectrum management device according to an embodiment. Only those logical operations illustrated inFIG.8Bdifferent from those described above with reference toFIG.2Bwill be discussed. As illustrated inFIG.8B, as received time tracking850may be applied to the I and Q data from the receiver210. An additional buffer851may further store the I and Q data received and a symbol detector852may identify the symbols of a signal of interest and determine the symbol rate. A multiple access scheme identifier module854may identify whether the signal is part of a multiple access scheme (e.g., CDMA), and a protocol identifier module856may attempt to identify the protocol the signal of interest is associated with. The multiple access scheme identifier module854and protocol identifier module856may retrieve data from the static database238to aid in the identification of the access scheme and/or protocol. The symbol detector module852may pass data to the signal parameters and protocols module858which may store protocol and symbol information in addition to signal parameter information for signals of interest. FIG.9illustrates a process flow of an embodiment method900for determining protocol data and symbol timing data. In an embodiment, the operations of method900may be performed by the processor214of a spectrum management device802. In determination block902the processor214may determine whether two or more signals are detected. If two or more signals are not detected (i.e., determination block902=“No”), in determination block902the processor214may continue to determine whether two or more signals are detected. If two or more signals are detected (i.e., determination block902=“Yes”), in determination block904the processor214may determine whether the two or more signals are interrelated. In an embodiment, a mean correlation value of the spectral decomposition of each signal may indicate the two or more signals are interrelated. As an example, a mean correlation of each signal may generate a value between 0.0 and 1.0, and the processor214may compare the mean correlation value to a threshold, such as a threshold of 0.75. In such an example, a mean correlation value at or above the threshold may indicate the signals are interrelated while a mean correlation value below the threshold may indicate the signals are not interrelated and may be different signals. In an embodiment, the mean correlation value may be generated by running a full energy bandwidth correlation of each signal, measuring the values of signal transition for each signal, and for each signal transition running a spectral correlation between signals to generate the mean correlation value. If the signals are not interrelated (i.e., determination block904=“No”), the signals may be two or more different signals, and in block907the processor214may measure the interference between the two or more signals.
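The interrelation test of determination block904, which compares a mean correlation value of the spectral decompositions against a threshold such as 0.75, may be sketched as follows; the single correlation call shown here is a simplification of the full energy bandwidth and signal-transition correlation described above:

import numpy as np

def signals_interrelated(spectrum_a, spectrum_b, threshold=0.75):
    # Correlate the spectral decompositions of the two signals; a mean
    # correlation value at or above the threshold indicates the signals
    # are interrelated parts of a single signal.
    mean_correlation = np.corrcoef(spectrum_a, spectrum_b)[0, 1]
    return mean_correlation >= threshold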
In an optional embodiment, in optional block909the processor214may generate a conflict alarm indicating the two or more different signals interfere. In an embodiment, the conflict alarm may be sent to the history database and/or a display. In determination block902the processor214may continue to determine whether two or more signals are detected. If the two or more signals are interrelated (i.e., determination block904=“Yes”), in block905the processor214may identify the two or more signals as a single signal. In block906the processor214may combine signal data for the two or more signals into a single signal entry in the history database. In determination block908the processor214may determine whether the means of the signals average together. If the means average together (i.e., determination block908=“Yes”), the processor214may identify the signal as having multiple channels in block910. If the means do not average together (i.e., determination block908=“No”) or after identifying the signal as having multiple channels, in block914the processor214may determine and store protocol data for the signal. In block916the processor214may determine and store symbol timing data for the signal, and the method900may return to block902. FIG.10illustrates a process flow of an embodiment method1000for calculating signal degradation data. In an embodiment, the operations of method1000may be performed by the processor214of a spectrum management device202. In block1002the processor may detect a signal. In block1004the processor214may match the signal to a signal in a static database. In block1006the processor214may determine noise figure parameters based on data in the static database236associated with the signal. As an example, the processor214may determine the noise figure of the signal based on parameters of a transmitter outputting the signal according to the static database236. In block1008the processor214may determine hardware parameters associated with the signal in the static database236. As an example, the processor214may determine hardware parameters such as antenna position, power settings, antenna type, orientation, azimuth, location, gain, and equivalent isotropically radiated power (EIRP) for the transmitter associated with the signal from the static database236. In block1010the processor214may determine environment parameters associated with the signal in the static database236. As an example, the processor214may determine environment parameters such as rain, fog, and/or haze based on a delta correction factor table stored in the static database and a provided precipitation rate (e.g., mm/hr). In block1012the processor214may calculate and store signal degradation data for the detected signal based at least in part on the noise figure parameters, hardware parameters, and environmental parameters. As an example, based on the noise figure parameters, hardware parameters, and environmental parameters, free space losses of the signal may be determined. In block1014the processor214may display the degradation data on a display242of the spectrum management device202. In a further embodiment, the degradation data may be used with measured terrain data of geographic locations stored in the static database to perform pattern distortion, generate propagation and/or next neighbor interference models, determine interference variables, and perform best fit modeling to aid in signal and/or system optimization.
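One hedged sketch of the degradation calculation of block1012uses free space path loss plus correction terms for the noise figure, hardware, and precipitation parameters; the correction inputs are assumed to have been looked up from the static database236, and the formula shown is the standard free space loss, not necessarily the disclosed model:

import math

def degradation_db(distance_m, frequency_hz, noise_figure_db=0.0,
                   antenna_gain_db=0.0, precipitation_correction_db=0.0):
    # Free space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c).
    fspl_db = (20.0 * math.log10(distance_m)
               + 20.0 * math.log10(frequency_hz)
               + 20.0 * math.log10(4.0 * math.pi / 299792458.0))
    # Illustrative composite degradation value for block 1012.
    return fspl_db + noise_figure_db + precipitation_correction_db - antenna_gain_db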
FIG.11illustrates a process flow of an embodiment method1100for displaying signal and protocol identification information. In an embodiment, the operations of method1100may be performed by a processor214of a spectrum management device202. In block1102the processor214may compare the signal parameters and protocol data of an identified signal to signal parameters and protocol data in a history database232. In an embodiment, a history database232may be a database storing signal parameters and protocol data for previously identified signals. In block1104the processor214may determine whether there is a match between the signal parameters and protocol data of the identified signal and the signal parameters and protocol data in the history database232. If there is not a match (i.e., determination block1104=“No”), in block1106the processor214may store the signal parameters and protocol data as a new signal in the history database232. If there is a match (i.e., determination block1104=“Yes”), in block1108the processor214may update the matching signal parameters and protocol data as needed in the history database232. In block1110the processor214may compare the signal parameters and protocol data of the identified signal to signal parameters and protocol data in the signal characteristic listing236. In determination block1112the processor214may determine whether the signal parameters and protocol data of the identified signal match any signal parameters and protocol data in the signal characteristic listing236. If there is a match (i.e., determination block1112=“Yes”), in block1114the processor214may indicate a match in the history database and in block1118may display an indication of the signal identification and protocol on a display. If there is not a match (i.e., determination block1112=“No”), in block1116the processor214may display an indication that the signal is an unidentified signal. In this manner, the user may be notified that a signal is present in the environment, but that the signal does not match a signal in the characteristic listing. FIG.12Ais a block diagram of a spectrum management device1202according to an embodiment. Spectrum management device1202is similar to spectrum management device802described above with reference toFIG.8A, except that spectrum management device1202may include TDOA/FDOA module1204and modulation module1206enabling the spectrum management device1202to identify the modulation type employed by a signal of interest and calculate signal origins. The modulation module1206may enable the signal processor to determine the modulation applied to a signal, such as frequency modulation (e.g., FSK, MSK, etc.) or phase modulation (e.g., BPSK, QPSK, QAM, etc.) as well as to demodulate the signal to identify payload data carried in the signal. The modulation module1206may use payload data1221from the characteristic listing to identify the data types carried in a signal. As examples, upon demodulating a portion of the signal the payload data may enable the processor214to determine whether voice data, video data, and/or text based data is present in the signal. The TDOA/FDOA module1204may enable the signal processor214to determine time difference of arrival for signals of interest and/or frequency difference of arrival for signals of interest. Using the TDOA/FDOA information, estimates of the origin of a signal may be made and passed to a mapping module1225which may control the display242to output estimates of a position and/or direction of movement of a signal.
FIG.12Bis a schematic logic flow block diagram illustrating logical operations which may be performed by a spectrum management device according to an embodiment. Only those logical operations illustrated inFIG.12Bdifferent from those described above with reference toFIG.8Bwill be discussed. A time tracking operation1250may be applied to the I and Q data from the receiver210, by a time tracking module, such as a TDOA/FDOA module. A magnitude squared1252operation may be performed on data from the symbol detector852to identify whether frequency or phase modulation is present in the signal. Phase modulated signals may be identified by the phase modulation1254processes and frequency modulated signals may be identified by the frequency modulation1256processes. The modulation information may be passed to a signal parameters, protocols, and modulation module1258. FIG.13illustrates a process flow of an embodiment method1300for estimating a signal origin based on a frequency difference of arrival. In an embodiment, the operations of method1300may be performed by a processor214of a spectrum management device1202. In block1302the processor214may compute frequency arrivals and phase arrivals for multiple instances of an identified signal. In block1304the processor214may determine the frequency difference of arrival for the identified signal based on the computed frequency and phase differences. In block1306the processor may compare the determined frequency difference of arrival for the identified signal to data associated with known emitters in the characteristic listing to estimate an identified signal origin. In block1308the processor214may indicate the estimated identified signal origin on a display of the spectrum management device. As an example, the processor214may overlay the estimated origin on a map displayed by the spectrum management device. FIG.14illustrates a process flow of an embodiment method1400for displaying an indication of an identified data type within a signal. In an embodiment, the operations of method1400may be performed by a processor214of a spectrum management device1202. In block1402the processor214may determine the signal parameters for an identified signal of interest. In block1404the processor214may determine the modulation type for the signal of interest. In block1406the processor214may determine the protocol data for the signal of interest. In block1408the processor214may determine the symbol timing for the signal of interest. In block1410the processor214may select a payload scheme based on the determined signal parameters, modulation type, protocol data, and symbol timing. As an example, the payload scheme may indicate how data is transported in a signal. For example, data in over-the-air television broadcasts may be transported differently than data in cellular communications, and the signal parameters, modulation type, protocol data, and symbol timing may identify the applicable payload scheme to apply to the signal. In block1412the processor214may apply the selected payload scheme to identify the data type or types within the signal of interest. In this manner, the processor214may determine what type of data is being transported in the signal, such as voice data, video data, and/or text based data. In block1414the processor may store the data type or types. In block1416the processor214may display an indication of the identified data types.
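By way of example only, the frequency arrival computation of blocks1302and1304may be approximated from the spectral peaks of two receptions of the same signal; the peak-bin method below is an assumption for illustration, not the disclosed computation:

import numpy as np

def fdoa_hz(iq_a, iq_b, sample_rate_hz):
    # Estimate the frequency difference of arrival between two receptions
    # of the same signal from the peak bins of their spectra.
    def peak_frequency(iq):
        spectrum = np.fft.fftshift(np.fft.fft(iq))
        freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / sample_rate_hz))
        return freqs[np.argmax(np.abs(spectrum))]
    return peak_frequency(iq_a) - peak_frequency(iq_b)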
FIG.15illustrates a process flow of an embodiment method1500for determining modulation type, protocol data, and symbol timing data. Method1500is similar to method900described above with reference toFIG.9, except that modulation type may also be determined. In an embodiment, the operations of method1500may be performed by a processor214of a spectrum management device1202. In blocks902,904,905,906,908, and910the processor214may perform operations of like numbered blocks of method900described above with reference toFIG.9. In block1502the processor may determine and store a modulation type. As an example, a modulation type may be an indication that the signal is frequency modulated (e.g., FSK, MSK, etc.) or phase modulated (e.g., BPSK, QPSK, QAM, etc.). As discussed above, in block914the processor may determine and store protocol data and in block916the processor may determine and store timing data. In an embodiment, based on signal detection, a time tracking module, such as a TDOA/FDOA module1204, may track the frequency repetition interval at which the signal is changing. The frequency repetition interval may also be tracked for a burst signal. In an embodiment, the spectrum management device may measure the signal environment and set anchors based on information stored in the historic or static database about known transmitter sources and locations. In an embodiment, the phase information about a signal may be extracted using a spectral decomposition correlation equation to measure the angle of arrival (“AOA”) of the signal. In an embodiment, the processor of the spectrum management device may determine the received power as the Received Signal Strength (“RSS”) and based on the AOA and RSS may measure the frequency difference of arrival. In an embodiment, the frequency shift of the received signal may be measured and aggregated over time. In an embodiment, after an initial sample of a signal, known transmitted signals may be measured and compared to the RSS to determine frequency shift error. In an embodiment, the processor of the spectrum management device may compute a cross ambiguity function of aggregated changes in arrival time and frequency of arrival. In an additional embodiment, the processor of the spectrum management device may retrieve FFT data for a measured signal and aggregate the data to determine changes in time of arrival and frequency of arrival. In an embodiment, the signal components of change in frequency of arrival may be averaged through a Kalman filter with a weighted tap filter from 2 to 256 weights to remove measurement error such as noise, multipath interference, etc. In an embodiment, frequency difference of arrival techniques may be applied when either the emitter of the signal or the spectrum management device is moving, or when the emitter of the signal and the spectrum management device are both stationary. When the emitter of the signal and the spectrum management device are both stationary, the determination of the position of the emitter may be made when the positions of at least four other known signal emitters are known and their signal characteristics are available. In an embodiment, a user may provide the four other known emitters and/or may use already-in-place known emitters, and may use the frequency, bandwidth, power, and distance values of the known emitters and their respective signals.
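The cross ambiguity function referenced above may be sketched as a brute-force grid evaluation; the integer-sample circular delays and the grid resolution are simplifying assumptions made for illustration:

import numpy as np

def cross_ambiguity(x, y, delay_samples, freq_shifts_hz, sample_rate_hz):
    # Evaluate |CAF(tau, f)| over a grid of delays and frequency shifts;
    # the peak of the surface locates the relative delay and Doppler.
    n = min(len(x), len(y))
    t = np.arange(n) / sample_rate_hz
    surface = np.zeros((len(delay_samples), len(freq_shifts_hz)))
    for i, d in enumerate(delay_samples):
        y_delayed = np.roll(y[:n], -d)  # circular integer-sample delay
        for j, f in enumerate(freq_shifts_hz):
            surface[i, j] = np.abs(np.sum(
                x[:n] * np.conj(y_delayed) * np.exp(-2j * np.pi * f * t)))
    return surface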
In an embodiment, where the emitter of the signal or the spectrum management device may be moving, frequency difference of arrival techniques may be performed using two known emitters. FIG.16illustrates an embodiment method for tracking a signal origin. In an embodiment, the operations of method1600may be performed by a processor214of a spectrum management device1202. In block1602the processor214may determine a time difference of arrival for a signal of interest. In block1604the processor214may determine a frequency difference of arrival for the signal of interest. As an example, the processor214may take the inverse of the time difference of arrival to determine the frequency difference of arrival of the signal of interest. In block1606the processor214may identify the location. As an example, the processor214may determine the location based on coordinates provided from a GPS receiver. In determination block1608the processor214may determine whether there are at least four known emitters present in the identified location. As an example, the processor214may compare the geographic coordinates for the identified location to a static database and/or historical database to determine whether at least four known signals are within an area associated with the geographic coordinates. If at least four known emitters are present (i.e., determination block1608=“Yes”), in block1612the processor214may collect and measure the RSS of the known emitters and the signal of interest. As an example, the processor214may use the frequency, bandwidth, power, and distance values of the known emitters and their respective signals and the signal of interest. If less than four known emitters are present (i.e., determination block1608=“No”), in block1610the processor214may measure the angle of arrival for the signal of interest and the known emitter. Using the RSS or angle of arrival, in block1614the processor214may measure the frequency shift and in block1616the processor214may obtain the cross ambiguity function. In determination block1618the processor214may determine whether the cross ambiguity function converges to a solution. If the cross ambiguity function does converge to a solution (i.e., determination block1618=“Yes”), in block1620the processor214may aggregate the frequency shift data. In block1622the processor214may apply one or more filters to the aggregated data, such as a Kalman filter. Additionally, the processor214may apply equations, such as weighted least squares equations and maximum likelihood equations, and additional filters, such as a non-line-of-sight (“NLOS”) filter, to the aggregated data. In an embodiment, the cross ambiguity function may resolve the position of the emitter of the signal of interest to within 3 meters. If the cross ambiguity function does not converge to a solution (i.e., determination block1618=“No”), in block1624the processor214may determine the time difference of arrival for the signal and in block1626the processor214may aggregate the time shift data. Additionally, the processor may filter the data to reduce interference. Whether based on frequency difference of arrival or time difference of arrival, the aggregated and filtered data may indicate a position of the emitter of the signal of interest, and in block1628the processor214may output the tracking information for the position of the emitter of the signal of interest to a display of the spectrum management device and/or the historical database.
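The Kalman filtering of the aggregated frequency shift data in block1622may be sketched as a one-dimensional filter; the process and measurement variances are hypothetical tuning values, and the weighted tap filter of 2 to 256 weights described above is omitted for brevity:

def kalman_smooth(measurements, process_var=1e-4, measurement_var=1e-2):
    # One-dimensional Kalman filter over aggregated frequency-shift
    # measurements to suppress noise and multipath-induced error.
    estimate, error = measurements[0], 1.0
    smoothed = [estimate]
    for z in measurements[1:]:
        error += process_var                      # predict
        gain = error / (error + measurement_var)  # update
        estimate += gain * (z - estimate)
        error *= 1.0 - gain
        smoothed.append(estimate)
    return smoothed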
In an additional embodiment, the location of emitters and the time and duration of transmission at a location may be stored in the history database such that historical information may be used to track and predict movement of signal transmission. In a further embodiment, the environmental factors may be considered to further reduce the measured error and generate a more accurate measurement of the location of the emitter of the signal of interest. The processor214of spectrum management devices202,802and1202may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described above. In some devices, multiple processors may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory226or230before they are accessed and loaded into the processor214. The processor214may include internal memory sufficient to store the application software instructions. In many devices the internal memory may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to memory accessible by the processor214including internal memory or removable memory plugged into the device and memory within the processor214itself. Identifying Devices in White Space. The present invention provides for systems, methods, and apparatus solutions for device sensing in white space, which improves upon the prior art by identifying sources of signal emission by automatically detecting signals and creating unique signal profiles. Device sensing has an important function and applications in military and other intelligence sectors, where identifying the emitter device is crucial for monitoring and surveillance, including specific emitter identification (SEI). At least two key functions are provided by the present invention: signal isolation and device sensing. Signal Isolation according to the present invention is a process whereby a signal is detected, isolated through filtering and amplification, amongst other methods, and key characteristics extracted. Device Sensing according to the present invention is a process whereby the detected signals are matched to a device through comparison to device signal profiles and may include applying a confidence level and/or rating to the signal-profile matching. Further, device sensing covers technologies that permit storage of profile comparisons such that future matching can be done with increased efficiency and/or accuracy. The present invention systems, methods, and apparatus are constructed and configured functionally to identify any signal emitting device, including by way of example and not limitation, a radio, a cell phone, etc. Regarding signal isolation, the following functions are included in the present invention: amplifying, filtering, detecting signals through energy detection, waveform-based, spectral correlation-based, radio identification-based, or matched filter methods, identifying interference, identifying environmental baseline(s), and/or identifying signal characteristics.
Regarding device sensing, the following functions are included in the present invention: using signal profiling and/or comparison with known database(s) and previously recorded profile(s), identifying the expected device or emitter, stating the level of confidence for the identification, and/or storing profiling and sensing information for improved algorithms and matching. In preferred embodiments of the present invention, the identification of the at least one signal emitting device is accurate to a predetermined degree of confidence between about 80 and about 95 percent, and more preferably between about 80 and about 100 percent. The confidence level or degree of confidence is based upon the amount of matching measured data compared with historical data and/or reference data for predetermined frequency and other characteristics. The present invention provides for wireless signal-emitting device sensing in the white space based upon a measured signal, and considers the basis of license(s) provided in at least one reference database, preferably the federal communication commission (FCC) and/or other defined database including license listings. The methods include the steps of providing a device for measuring characteristics of signals from signal emitting devices in a spectrum associated with wireless communications, the characteristics of the measured data from the signal emitting devices including frequency, power, bandwidth, duration, modulation, and combinations thereof; making an assessment or categorization of analog and/or digital signal(s); determining the best fit based on frequency if the measured power spectrum is designated in historical and/or reference data, including but not limited to the FCC or other database(s) for select frequency ranges; determining analog or digital, based on power and sideband combined with frequency allocation; determining a TDM/FDM/CDM signal, based on duration and bandwidth; determining the best modulation fit for the desired signal, if the bandwidth and duration match the signal database(s); adding modulation identification to the database; listing possible modulations with best percentage fit, based on the power, bandwidth, frequency, duration, database allocation, and combinations thereof; and identifying at least one signal emitting device from the composite results of the foregoing steps. Additionally, the present invention provides that the phase measurement of the signal is calculated from the difference between the end frequency of the bandwidth and the peak center frequency, and the difference between the start frequency of the bandwidth and the peak center frequency, to obtain a better measurement of the sideband drop-off rate of the signal and thereby help determine the modulation of the signal. In embodiments of the present invention, an apparatus is provided for automatically identifying devices in a spectrum, the apparatus including a housing, at least one processor and memory, and sensors constructed and configured for sensing and measuring wireless communications signals from signal emitting devices in a spectrum associated with wireless communications; and wherein the apparatus is operable to automatically analyze the measured data to identify at least one signal emitting device in near real time from attempted detection and identification of the at least one signal emitting device. The characteristics of signals and measured data from the signal emitting devices include frequency, power, bandwidth, duration, modulation, and combinations thereof.
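The listing of possible modulations with best percentage fit may be sketched as follows; the scoring over bandwidth and duration alone is a hypothetical stand-in for the composite fit across power, bandwidth, frequency, duration, and database allocation described above:

def modulation_fit(measured, candidates):
    # Score each candidate modulation by percentage fit of bandwidth and
    # duration, returning the candidates best-fit first.
    scored = []
    for cand in candidates:  # cand: {"name", "bandwidth", "duration"}
        bw_fit = 1.0 - min(1.0, abs(measured["bandwidth"] - cand["bandwidth"])
                           / cand["bandwidth"])
        dur_fit = 1.0 - min(1.0, abs(measured["duration"] - cand["duration"])
                            / cand["duration"])
        scored.append((cand["name"], round(50.0 * (bw_fit + dur_fit), 1)))
    return sorted(scored, key=lambda item: item[1], reverse=True)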
The present invention provides systems including at least one apparatus, wherein the at least one apparatus is operable for network-based communication with at least one server computer including a database, and/or with at least one other apparatus, but does not require a connection to the at least one server computer to be operable for identifying signal emitting devices; wherein each apparatus is operable for identifying signal emitting devices and includes: a housing, at least one processor and memory, and sensors constructed and configured for sensing and measuring wireless communications signals from signal emitting devices in a spectrum associated with wireless communications; and wherein the apparatus is operable to automatically analyze the measured data to identify at least one signal emitting device in near real time from attempted detection and identification of the at least one signal emitting device. Identifying Open Space in a Wireless Communication Spectrum. The present invention provides for systems, methods, and apparatus solutions for automatically identifying open space, including open space in the white space of a wireless communication spectrum. Importantly, the present invention identifies the open space as the space that is unused and/or seldom used (and identifies the owner of the licenses for the seldom used space, if applicable), including unlicensed spectrum, white space, guard bands, and combinations thereof. Method steps of the present invention include: automatically obtaining a listing or report of all frequencies in the frequency range; plotting a line and/or graph chart showing power and bandwidth activity; setting frequencies based on a frequency step and/or resolution so that only user-defined frequencies are plotted; generating files, such as by way of example and not limitation, .csv or .pdf files, showing average and/or aggregated values of power, bandwidth and frequency for each derived frequency step; showing an activity report over time, over day vs. night, over frequency bands if more than one, in white space if requested, and in the Industrial, Scientific, and Medical (ISM) band or space if requested; and, if frequency space is seldom used in that area, identifying and listing frequencies and license holders. Additional steps include: automatically scanning the frequency span, wherein a default scan includes a frequency span between about 54 MHz and about 804 MHz; an ISM scan between about 900 MHz and about 2.5 GHz; an ISM scan between about 5 GHz and about 5.8 GHz; and/or a frequency range based upon inputs provided by a user. Also, method steps include scanning for an allotted amount of time between a minimum of about 15 minutes up to about 30 days; preferably scanning for allotted times selected from the following: a minimum of about 15 minutes; about 30 minutes; about 1 hour increments; about 5 hour increments; about 10 hour increments; about 24 hours; about 1 day; and about up to 30 days; and combinations thereof. In preferred embodiments, if the apparatus is configured for automatically scanning for more than about 15 minutes, then the apparatus is preferably set for updating results, including updating graphs and/or reports, at an approximately equal interval (e.g., every 15 minutes). The systems, methods, and apparatus also provide for automatically calculating a percent activity associated with the identified open space on predetermined frequencies and/or ISM bands, as sketched below.
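By way of illustration, the default scan spans and the percent-activity calculation may be sketched as follows; the preset names and the threshold comparison are assumptions, not disclosed values:

SCAN_PRESETS_HZ = {
    "default": (54e6, 804e6),   # about 54 MHz to about 804 MHz
    "ism_low": (900e6, 2.5e9),  # ISM scan, about 900 MHz to about 2.5 GHz
    "ism_high": (5e9, 5.8e9),   # ISM scan, about 5 GHz to about 5.8 GHz
}

def percent_activity(avg_power, power_threshold):
    # Percentage of measured frequency steps whose average power exceeds
    # the threshold, i.e., activity on the scanned band.
    if not avg_power:
        return 0.0
    active = sum(1 for p in avg_power if p > power_threshold)
    return 100.0 * active / len(avg_power)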
Signal Database. Preferred embodiments of the present invention provide that the sensed and/or measured data received by the at least one apparatus of the present invention, analyzed data, historical data and/or reference data, change-in-state data, and any updates thereto are storable on each of the at least one apparatus. In systems of the present invention, each apparatus further includes transmitters for sending the sensed and/or measured data received by the at least one apparatus of the present invention, analyzed data, historical data and/or reference data, change-in-state data, and any updates thereto via the network to the at least one remote server computer and its corresponding database(s). Preferably, the server(s) aggregate the data received from the multiplicity of apparatus or devices to produce a composite database for each of the types of data indicated. Thus, while each of the apparatus or devices is fully functional and self-contained within the housing for performing all method steps and operations without network-based communication connectivity with the remote server(s), when connected, as illustrated inFIG.29, the distributed devices provide the composite database, which allows for additional analytics not possible for individual, isolated apparatus or device units (when not connected in network-based communication), which solves a longstanding, unmet need. In particular, the aggregation of data from distributed, different apparatus or device units allows for comparison of sample sets of data to compare signal data or information for similar factors, including time(s), day(s), venues, geographic locations or regions, situations, activities, etc., as well as for comparing various signal characteristics with the factors, wherein the signal characteristics and their corresponding sensed and/or measured data, including raw data and change-in-state data, and/or analyzed data from the signal emitting devices include frequency, power, bandwidth, duration, modulation, and combinations thereof. Preferably, the comparisons are conducted in near real time. The aggregation of data may provide for information about the same or similar mode from apparatus to apparatus, scanning the same or different frequency ranges, with different factors and/or signal characteristics received and stored in the database(s), both on each apparatus or device unit, and when they are connected in network-based communication for transmission of the data to the at least one remote server. The aggregation of data from a multiplicity of units also advantageously provides for continuous, 24 hours/7 days per week scanning, and allows the system to identify sections that exist as well as possibly omitted information or lost data, which may still be considered for comparisons, even if it is incomplete. From a time standpoint, there may not be a linearity with respect to when data is collected or received by the units; rather, the systems and methods of the present invention provide for automated matching of time, i.e., matching time frames and relative times, even where the environment, activities, and/or context may be different for different units.
By way of example and not limitation, different units may sense and/or measure the same signal from the same signal emitting device in the spectrum, but interference, power, environmental factors, and other factors may present identification issues that preclude one of the at least one apparatus or device units from determining the identity of the signal emitting device with the same degree of certainty or confidence. The variation in this data from a multiplicity of units measuring the same signals provides for aggregation and comparison at the remote server using the distributed databases from each unit to generate a variance report in near real time. Thus, the database(s) provide a repository database in memory on the apparatus or device units, and/or data from a multiplicity of units are aggregated on at least one remote server to provide an active network with distributed nodes over a region that produce an active or dynamic database of signals, identified devices, identified open space, and combinations thereof, and the nodes may report to or transmit data via network-based communication to a central hub or server. This provides for automatically comparing signal emitting devices or their profiles and corresponding sensed or measured data, situations, activities, geographies, times, days, and/or environments, which provides unique composite and comparison data that may be continuously updated. FIG.29shows a schematic diagram illustrating aspects of the systems, methods and apparatus according to the present invention. Each node includes an apparatus or device unit, referenced inFIG.29as “SigSet Device A”, “SigSet Device B”, “SigSet Device C”, and through “SigSet Device N”, that are constructed and configured for selective exchange, both transmitting and receiving information over a network connection, either wired or wireless communications, with the master SigDB or database at a remote server location from the units. Furthermore, the database aggregating nodes of the apparatus or device units provide a baseline compared with new data, which provides for near real time analysis and results within each of the at least one apparatus or device unit, which calculates and generates results such as signal emitting device identification, identification of open space, signal optimization, and combinations thereof, based upon the particular settings of each of the at least one apparatus or device unit. The settings include frequency ranges, location and distance from other units, difference in propagation from one unit to another unit, and combinations thereof, which factor into the final results. The present invention systems, methods, and apparatus embodiments provide for leveraging the use of deltas or differentials from the baseline, as well as actual data, to provide onsite sensing, measurement, and analysis for a given environment and spectrum, for each of the at least one apparatus or device unit. Because the present invention provides the at least one processor on each unit to compare signals and signal characteristic differences using compressed data for deltas to provide near real time results, the database storage may further be optimized by storing compressed data and/or deltas, and then decompressing and/or reconstructing the actual signals using the deltas and the baseline. Analytics are also provided using this approach.
Thus, the signal database(s) provide for reduced data storage, down to the smallest sample set that still provides at least the baseline and the deltas to enable signal reconstruction and analysis to produce the results described according to the present invention. Preferably, the modeling and virtualization analytics enabled by the databases on each of the at least one apparatus or device units independently of the remote server computer, and also provided on the remote server computer from aggregated data, provide for “gap filling” for omitted or absent data and/or for reconstruction from deltas. A multiplicity of deltas may provide for signal identification, interference identification, neighboring band identification, device identification, signal optimization, and combinations thereof, all in near real time. Significantly, the deltas approach of the present invention provides for minimization of the data sets or sample data sets required for comparisons and/or analytics, i.e., the smallest range of time, frequency, etc. that captures all representative signals and/or deltas associated with the signals, environment conditions, noise, etc. The signal database(s) may be represented with visual indications including diagrams, graphs, plots, tables, and combinations thereof, which may be presented directly by the apparatus or device unit on its corresponding display contained within the housing. Also, the signal database(s) enable each apparatus or device unit to receive a first sample data set in a first time period, a second sample data set in a second time period, and an Nth sample data set in a corresponding Nth time period; to save or store each of the at least two distinct sample data sets; and to automatically compare the at least two sample data sets to determine a change-in-state or “delta”. Preferably, the database receives and stores at least the first of the at least two data sets and also stores the delta. The stored delta values provide for quick analytics and regeneration of the actual values of the sample sets from the delta values, which advantageously contributes to the near real time results of the present invention. In preferred embodiments of the present invention, the at least one apparatus is continuously scanning the environment for signals, deltas from at least one prior sample data set, and combinations thereof, which are categorized, classified, and stored in memory.
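A minimal sketch of the baseline-plus-delta storage and reconstruction described above, assuming array-valued sample data sets (the compressed file format shown is an illustrative choice, not the disclosed one):

import numpy as np

def store_delta(path, baseline, sample):
    # Store only the compressed change-in-state ("delta") against the baseline.
    delta = np.asarray(sample) - np.asarray(baseline)
    np.savez_compressed(path, delta=delta)

def reconstruct(baseline, delta):
    # Regenerate the actual sample values from the baseline plus the delta.
    return np.asarray(baseline) + np.asarray(delta)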
The systems, methods and apparatus embodiments of the present invention include hardware and software components and requirements to provide for each of the apparatus units to connect and communicate different data they sense, measure, analyze, and/or store on local database(s) in memory on each of the units with the remote server computer and database. Thus, the master database or “SigDB” is operable to be applied and connect to the units, and may include hardware and software commercially available, for example SQL Server 2012, and to be applied to provide a user the criteria to upgrade/update their current server network to the correct configuration that is required to operate and access the SigDB. Also, the SigDB is preferably designed and constructed as a full hardware and software system configuration for the user, including load testing and network security and configuration. Other exemplary requirements include that the SigDB will include a database structure that can sustain a multiplicity of apparatus units' information; provide a method to update the FCC database and/or historical database according to a set time (every month/quarter/week, etc.), and in accordance with changes to the FCC.gov databases that are integrated into the database; be operable to receive and to download unit data from a remote location through a network connection; be operable to query apparatus unit data stored within the SigDB database server and to query apparatus unit data in ‘present’ time from a particular apparatus unit device for a given ‘present’ time not available in the current SigDB server database; update this information into its own database structure; keep track of Device Identifications and the information each apparatus unit is collecting including its location; query the apparatus units based on Device ID or location of device or apparatus unit; connect to several devices and/or apparatus units on a distributed communications network; partition data from each apparatus unit or device and differentiate the data from each based on its location and Device ID; join queries from several devices if a user wants to know information acquired from several remote apparatus units at a given time; provide the ability for several users (currently up to 5 per apparatus unit or device) to query information from the SigDB database or apparatus unit or device; grant access permissions to records for each user based on device ID, pertinent information or tables/location; connect to a user GUI from a remote device such as a workstation or tablet PC from a Web App application; retrieve data queries based on user information and/or jobs; integrate external database information from the apparatus units; and combinations thereof. Also, in preferred embodiments, a GUI based on Web Application software is provided; in one embodiment, the SigDB GUI is provided in any appropriate software, such as by way of example, in Visual Studio using .Net/Asp.Net technology or JavaScript. In any case, the SigDB GUI preferably operates across cross-platform systems with correct browser and operating system (OS) configuration; provides the initial requirements of a History screen in each apparatus unit to access server information or query a remote apparatus unit containing the desired user information; and generates .csv and .pdf reports that are useful to the user. Automated Reports and Visualization of Analytics. Various reports for describing and illustrating with visualization the data and analysis of the device, system and method results from spectrum management activities include at least reports on power usage, RF survey, and/or variance, as well as interference detection, intermodulation detection, uncorrelated licenses, and/or open space identification. The systems, methods, and devices of the various embodiments enable spectrum management by identifying, classifying, and cataloging signals of interest based on radio frequency measurements. In an embodiment, signals and the parameters of the signals may be identified and indications of available frequencies may be presented to a user. In another embodiment, the protocols of signals may also be identified. In a further embodiment, the modulation of signals, devices or device types emitting signals, data types carried by the signals, and estimated signal origins may be identified.
Referring again to the drawings,FIG.17is a schematic diagram illustrating an embodiment for scanning and finding open space. A plurality of nodes are in wireless or wired communication with a software defined radio, which receives information concerning open channels following real-time scanning and access to external database frequency information. FIG.18is a diagram of an embodiment of the invention wherein software defined radio nodes are in wireless or wired communication with a master transmitter and device sensing master. FIG.19is a process flow diagram of an embodiment method of temporally dividing up data into intervals for power usage analysis and comparison. The data intervals are initially set to seconds, minutes, hours, days and weeks, but can be adjusted to account for varying time periods (e.g., if an overall interval of data is only a week, the data interval divisions would not be weeks). In one embodiment, the interval slicing of data is used to produce power variance information and reports. FIG.20is a flow diagram illustrating an embodiment wherein frequency to license matching occurs. In such an embodiment the center frequency and bandwidth criteria can be checked against a database to check for a license match. Both licensed and unlicensed bands can be checked against the frequencies, and, if necessary, non-correlating factors can be marked when a frequency is uncorrelated. FIG.21is a flow diagram illustrating an embodiment method for reporting power usage information, including locational data, data broken down by time intervals, frequency and power usage information per band, average power distribution, propagation models, and atmospheric factors, which is capable of being represented graphically, quantitatively, qualitatively, and overlaid onto a geographic or topographic map. FIG.22is a flow diagram illustrating an embodiment method for creating frequency arrays. For each initialization, an embodiment of the invention will determine a center frequency, bandwidth, peak power, noise floor level, resolution bandwidth, power and date/time. Start and end frequencies are calculated using the bandwidth and center frequency, and like frequencies are aggregated and sorted in order to produce a set of frequency arrays matching power measurements captured in each band. FIG.23is a flow diagram illustrating an embodiment method for reframing and aggregating power when producing frequency arrays.
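The frequency array construction ofFIG.22may be sketched as follows; the record schema and the use of an average as the per-band aggregate are assumptions made for illustration:

def frequency_arrays(records):
    # Each record supplies a center frequency, bandwidth, and power
    # measurement; start and end frequencies are calculated from the
    # bandwidth and center frequency, and like frequencies are aggregated.
    bands = {}
    for r in records:
        start = r["center_freq"] - r["bandwidth"] / 2.0
        end = r["center_freq"] + r["bandwidth"] / 2.0
        bands.setdefault((start, end), []).append(r["power"])
    # Sorted bands paired with the average power captured in each band.
    return {band: sum(powers) / len(powers)
            for band, powers in sorted(bands.items())}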
FIG.24is a flow diagram illustrating an embodiment method of reporting license expirations by accessing static or FCC databases. FIG.25is a flow diagram illustrating an embodiment method of reporting frequency power use in graphical, chart, or report format, with the option of adding frequencies from FCC or other databases. FIG.26is a flow diagram illustrating an embodiment method of connecting devices. After acquiring a GPS location, static and FCC databases are accessed to update license information, if available. A frequency scan will find open spaces and detect interferences and/or collisions. Based on the master device ID, a randomly generated token is set to select a channel from the available channel model, and the ID channel token is continually transmitted. If a node device reads the ID, it will set itself to the channel based on the token and will connect to the master device. The master device will then set the frequency and bandwidth channel. For each device connected to the master, a frequency, bandwidth, and time slot in which to transmit are set. In one embodiment, these steps can be repeated until the maximum number of devices is connected. As new devices are connected, the device list is updated with the channel model and the device is set as active. Disconnected devices are set as inactive. If a collision occurs, the channel model is updated and a new token channel is obtained. Active scans will search for new or lost devices and update the devices list, channel model, and status accordingly. Channel model IDs are actively sent out for new or lost devices. FIG.27is a flow diagram illustrating an embodiment method of addressing collisions. FIG.28is a schematic diagram of an embodiment of the invention illustrating a virtualized computing (or cloud-based) network and a plurality of distributed devices, including components of a cloud-based computing system and network for distributed communication therewith by mobile communication devices. As illustrated inFIG.28, a basic schematic of some of the key components of a virtualized computing (or cloud-based) system according to the present invention is shown. The system2800comprises at least one remote server computer2810with a processing unit2811and memory. The server2810is constructed, configured and coupled to enable communication over a network2850. The server provides for user interconnection over the network with the at least one apparatus2840as described hereinabove, positioned remotely from the server. Apparatus2840includes a memory2846, a CPU2844, an operating system2847, a bus2842, an input/output module2848, and an output or display2849. Furthermore, the system is operable for a multiplicity of devices or apparatus embodiments2860,2870, for example, in a client/server architecture, as shown, each having outputs or displays2869and2879, respectively. Alternatively, interconnection is provided through the network2850using the at least one device or apparatus for measuring signal emitting devices, each of which is operable for network-based communication. Also, alternative architectures may be used instead of the client/server architecture. For example, a computer communications network, or other suitable architecture may be used. The network2850may be the Internet, an intranet, or any other network suitable for searching, obtaining, and/or using information and/or communications. The system of the present invention further includes an operating system2812installed and running on the at least one remote server2810, enabling the server2810to communicate through network2850with the remote, distributed devices or apparatus embodiments as described herein above, the server2810having a memory2820. The operating system may be any operating system known in the art that is suitable for network communication. FIG.29shows a schematic diagram of aspects of the present invention. FIG.30is a schematic diagram of an embodiment of the invention illustrating a computer system, generally described as3800, having a network3810and a plurality of computing devices3820,3830,3840. In one embodiment of the invention, the computer system3800includes a cloud-based network3810for distributed communication via the network's wireless communication antenna3812and processing by a plurality of mobile communication computing devices3830.
In another embodiment of the invention, the computer system3800is a virtualized computing system capable of executing any or all aspects of software and/or application components presented herein on the computing devices3820,3830,3840. In certain aspects, the computer system3800may be implemented using hardware or a combination of software and hardware, either in a dedicated computing device, or integrated into another entity, or distributed across multiple entities or computing devices. By way of example, and not limitation, the computing devices3820,3830,3840are intended to represent various forms of digital devices3820,3840and mobile devices3830, such as a server, blade server, mainframe, mobile phone, a personal digital assistant (PDA), a smart phone, a desktop computer, a netbook computer, a tablet computer, a workstation, a laptop, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed in this document. In one embodiment, the computing device3820includes components such as a processor3860, a system memory3862having a random access memory (RAM)3864and a read-only memory (ROM)3866, and a system bus3868that couples the memory3862to the processor3860. In another embodiment, the computing device3830may additionally include components such as a storage device3890for storing the operating system3892and one or more application programs3894, a network interface unit3896, and/or an input/output controller3898. Each of the components may be coupled to each other through at least one bus3868. The input/output controller3898may receive and process input from, or provide output to, a number of other devices3899, including, but not limited to, alphanumeric input devices, mice, electronic styluses, display units, touch screens, signal generation devices (e.g., speakers) or printers. By way of example, and not limitation, the processor3860may be a general-purpose microprocessor (e.g., a central processing unit (CPU)), a graphics processing unit (GPU), a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated or transistor logic, discrete hardware components, or any other suitable entity or combinations thereof that can perform calculations, process instructions for execution, and/or other manipulations of information. In another implementation, shown inFIG.30, a computing device3840may use multiple processors3860and/or multiple buses3868, as appropriate, along with multiple memories3862of multiple types (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core). Also, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., a server bank, a group of blade servers, or a multi-processor system). Alternatively, some steps or methods may be performed by circuitry that is specific to a given function. According to various embodiments, the computer system3800may operate in a networked environment using logical connections to local and/or remote computing devices3820,3830,3840through a network3810. A computing device3830may connect to a network3810through a network interface unit3896connected to the bus3868. 
Computing devices may communicate through communication media over wired networks or direct-wired connections, or wirelessly, such as acoustic, RF or infrared, through a wireless communication antenna 3897 in communication with the network's wireless communication antenna 3812 and the network interface unit 3896, which may include digital signal processing circuitry when necessary. The network interface unit 3896 may provide for communications under various modes or protocols.

In one or more exemplary aspects, the instructions may be implemented in hardware, software, firmware, or any combinations thereof. A computer readable medium may provide volatile or non-volatile storage for one or more sets of instructions, such as operating systems, data structures, program modules, applications or other data embodying any one or more of the methodologies or functions described herein. The computer readable medium may include the memory 3862, the processor 3860, and/or the storage device 3890, and may be a single medium or multiple media (e.g., a centralized or distributed computer system) that store the one or more sets of instructions 3900. Non-transitory computer readable media includes all computer readable media, with the sole exception being a transitory, propagating signal per se. The instructions 3900 may further be transmitted or received over the network 3810 via the network interface unit 3896 as communication media, which may include a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics changed or set in a manner as to encode information in the signal.

Storage devices 3890 and memory 3862 include, but are not limited to, volatile and non-volatile media such as cache, RAM, ROM, EPROM, EEPROM, FLASH memory or other solid state memory technology; disks or discs (e.g., digital versatile disks (DVD), HD-DVD, BLU-RAY, compact disc (CD), CD-ROM, floppy disc) or other optical storage; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices; or any other medium that can be used to store the computer readable instructions and which can be accessed by the computer system 3800.

It is also contemplated that the computer system 3800 may not include all of the components shown in FIG. 30, may include other components that are not explicitly shown in FIG. 30, or may utilize an architecture completely different than that shown in FIG. 30. The various illustrative logical blocks, modules, elements, circuits, and algorithms described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application (e.g., arranged in a different order or partitioned in a different way), but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The present invention further provides for aggregating data from at least two apparatus units by at least one server computer and storing the aggregated data in a database and/or in at least one database in a cloud-based computing environment or virtualized computing environment, as illustrated in FIG. 28 or FIG. 30. The present invention further provides for remote access to the aggregated data and/or data from any of the at least one apparatus unit by distributed remote user(s) from corresponding distributed remote device(s), such as, by way of example and not limitation, desktop computers, laptop computers, tablet computers, mobile computers with wireless communication operations, smartphones, mobile communications devices, and combinations thereof. The remote access to data is provided by software applications operable on computers directly (as a "desktop" application) and/or as a web service that allows user interface to the data through secure, network-based website access.

Other embodiments of the present invention include the base invention described hereinabove and further include the functions of machine "learning", modulation detection, automatic signal detection (ASD), FFT replay, and combinations thereof. Automatic modulation detection and machine "learning" include automatic signal variance determination by at least one of the following methods: date and time from location set, and remote access to the apparatus unit to determine variance from different locations and times, in addition to the descriptions of automatic signal detection and threshold determination and setting.

Environments vary, especially where there are many signals, noise, interference, variance, etc., so tracking signals automatically is difficult and represents a longstanding, unmet need in the prior art. The present invention provides for automatic signal detection using a sample of measured and sensed data associated with signals over time, using the at least one apparatus unit of the present invention to provide an automatically adjustable and adaptable system. For each spectrum scan, the data is automatically subdivided into "windows", which are sections or groups of data within a frequency space. Real-time processing of the measured and sensed data on the apparatus unit(s) or devices, combined with the windowing effect, provides for automatic comparison of signal versus noise within the window to provide for noise approximation, wherein both signals and noise are measured and sensed, recorded, analyzed, and compared with historical data to identify and output signals in a high noise environment. The approach is adaptive and iterative, to include focused windows and changes in the window or frequency ranges grouped. The resulting values for all data are squared in the analysis, so that signals are identified easily by the apparatus unit as having significantly larger power values compared with noise; additional analytics provide for selection of the highest power value signals and review of the original data corresponding thereto. Thus, the at least one apparatus automatically determines and identifies signals compared to noise in the RF spectrum.

The apparatus unit or device of the present invention further includes a temporal anomaly detector (or "learning channel"). The first screen shot illustrated in FIG. 31 shows the blank screen; the second screen shot illustrated in FIG. 32 shows several channels that the system has "learned".
This table can be saved to disk as a spreadsheet and reused on subsequent surveys at the same location. The third screen shot, shown in FIG. 33, displays the results when run with the "Enable OOB Signals" button enabled. In this context, OOB means "Out Of Band", i.e., rogue or previously unidentified signals. Once a baseline set of signals has been learned by the system, it can be used with automatic signal detection to clearly show new, unknown signals that were not present when the initial learning was done, as shown in FIG. 34. In a similar capacity, the user can load a spreadsheet that they have constructed on their own to describe the channels that they expect to see in a given environment, as illustrated in FIG. 34. When run with OOB detection, the screen shot shows the detection of signals that were not in the user configuration. These rogue signals could be a possible source of interference, and automatic detection of them can greatly assist the job of an RF Manager. FIGS. 31-34 illustrate the functions and features of the present invention for automatic or machine "learning" as described hereinabove.

Automatic signal detection of the present invention eliminates the need for a manual setting of a power threshold line or bar, as with the prior art. The present invention does not require a manual setting of a power threshold bar or flat line to distinguish signals from noise; instead, it uses information learned directly from the changing RF environment to identify signals. Thus, the apparatus unit or device may be activated and left unattended to collect data continuously without the need for manual interaction with the device directly. Furthermore, the present invention allows remote viewing of live data in real time on a display of a computer or communications device in network-based connection but remotely positioned from the apparatus unit or device, and/or remote access to device settings, controls, data, and combinations thereof. The network-based communication may be selected from mobile, satellite, Ethernet, and functional equivalents or improvements, with security including firewalls, encryption of data, and combinations thereof.

Regarding FFT replay, the present invention apparatus units are operable to replay data and to review and/or replay data saved based upon an unknown event, such as, for example and not limitation, reported alarms and/or unique events, wherein the FFT replay is operable to replay stored sensed and measured data to the section of data nearest the reported alarm and/or unique event. By contrast, the prior art provides for recording signals on RF spectrum measurement devices, which transmit or send the raw data to an external computer for analysis; it is then impossible to replay or review specific sections of data, as they are not searchable, tagged, or otherwise sectioned into subgroups of data or stored on the device.

Automatic Signal Detection

The previous approach to ASD was to subtract a calibration vector from each FFT sample set (de-bias), then square each resulting value and look for concentrations of energy that would differentiate a signal from random baseline noise. The advantage of this approach is that, by the use of the calibration vector (which was created using the receiver itself with no antenna), variations in the baseline noise that are due to the characteristics of the receiver, front end filtering, attenuation and A/D converter hardware can be closely tracked.
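For illustration only, the de-bias and square step of this previous approach can be sketched in Java as follows; the method name, array layout, and the single threshold standing in for the "tuning" variables are hypothetical, not taken from the actual implementation:

// Sketch of the previous ASD approach: subtract the calibration vector
// from an FFT sample set (de-bias), square each result, and flag bins whose
// squared energy exceeds a tuned threshold (a stand-in for the real
// concentration-of-energy test).
public boolean[] squareAndDetect(double[] fftBins, double[] calibration, double threshold) {
    boolean[] isSignal = new boolean[fftBins.length];
    for (int i = 0; i < fftBins.length; i++) {
        double debiased = fftBins[i] - calibration[i]; // remove receiver-induced bias
        double energy = debiased * debiased;           // squaring separates signal from baseline noise
        isSignal[i] = energy > threshold;              // "tuning" variable an operator must adjust
    }
    return isSignal;
}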
On most modern equipment, the designers take steps to keep the overall response flat, but there are those that do not. FIG. 35 is an example of a receiver that has marked variations in baseline behavior across a wide spectrum (9 MHz-6 GHz).

The drawbacks to this approach are: 1) It requires the use of several "tuning" variables which the user often must adjust in order to achieve good signal recognition. A fully automatic signal detection system should be able to choose values for these parameters without the intervention of an operator. 2) It does not take into account variations in the baseline noise floor that are introduced by RF energy in a live environment. Since these variations were not present during calibration, they are not part of the calibration vector and cannot be "canceled out" during the de-bias phase. Instead they remain during the square and detect phase, often being mistakenly classified as signal. An example of this is FIG. 36, a normal spectrum from 700 MHz to 790 MHz. The threshold line (baby blue) indicates the level where signal can be differentiated from noise. FIG. 37 illustrates the same spectrum at a different time, where an immensely powerful signal at about 785 MHz has caused undulations in the noise floor all the way down to 755 MHz. It is clear from the placement of the threshold line that large blocks of the noise are now going to be recognized as signal. Not only are the four narrow band signals now going to be mistakenly seen as one large signal, but there is also an additional lump of noise around 760 MHz that represents no signal at all, yet will be classified as such.

In order to solve these two problems, and provide a fully automatic signal detection system, a new approach has been taken to prepare the calibration vector. The existing square and detect algorithm works well if the data are de-biased properly with a cleverly chosen calibration vector; it is just that the way the calibration vector was created was not sufficient.

FIG. 38 illustrates a spectrum from 1.9 GHz to 2.0 GHz, along with some additional lines that indicate the functions of the new algorithm. The brown line at the bottom displays the existing calibration vector created by running the receiver with no antenna. It is clear to see that, if used as is, it is too low to be used to de-bias the data (the dark blue plot). Also, much of the elevations in noise floor will wind up being part of the signals that are detected. In order to compensate for this, the user was given a control (called "Bias") that allowed them to raise or lower the calibration vector to hopefully achieve a more reasonable result. But, as illustrated in FIG. 37, no adjustment will suffice when the noise floor has been distorted due to the injection of large amounts of energy. So, rather than attempt to make the calibration vector fit the data, the new approach examines the data itself in an attempt to use parts of it as the correction vector. This is illustrated by the light purple and baby blue lines in FIG. 38. The light purple line is the result of using a 60 sample smoothing filter to average the raw data. It clearly follows the data, but it removes the "jumpiness". This can be better seen in FIG. 39, which is a close-up view of the first part of the overall spectrum, showing the difference between the smoothed data (light purple) and the original data (dark blue). The new Gradient Detection algorithm is applied to the smoothed data to detect locations where the slope of the line changes quickly.
In places where the slope changes quickly in a positive direction, the algorithm marks the start of a signal. On the other side of the signal, the gradient again changes quickly to become more horizontal. At that point the algorithm determines it is the end of a signal. A second smoothing pass is performed on the smoothed data, but this time, those values that fall between the proposed start and end of signal are left out of the average. The result of this is the baby blue line, which is then used as the new calibration vector. This new calibration vector (baby blue line) is then used to de-bias the raw data, which is then passed to the existing square and detect ASD algorithm.

One of the other user-tunable parameters in the existing ASD system was called "Sensitivity". This was a parameter that essentially set an energy threshold that each FFT bin in a block of averaged bins must exceed in order for that block of bins to be considered a signal. In this way, rather than a single horizontal line to divide signal from noise, each signal can be evaluated individually, based on its average power. The effect of setting this value too low was that tiny fluctuations of energy that are actually noise would sometimes appear to be signals. Setting the value too high would result in the algorithm missing a signal. In order to automatically choose a value for this parameter, the new system uses "Quality of Service" feedback from the Event Compositor, a module that processes the real-time events from the ASD system and writes signal observations into a database. When the sensitivity value is too low, the random bits of energy that ASD mistakenly sees as signal are very transient. This is due to the random nature of noise. The Event Compositor has a parameter called a "Pre-Recognition Delay" that sets the minimum number of consecutive scans in which it must see a signal in order for the signal to be considered a candidate for a signal observation database entry (in order to catch large, fast signals, an exception is made for large transients that are either high in peak power or in bandwidth). Since the random fluctuations seldom persist for more than 1 or 2 sweeps, the Event Compositor ignores them, essentially filtering them out. If there are a large number of these transients, the Event Compositor provides feedback to the ASD module to inform it that its sensitivity is too low. Likewise, if there are no transients at all, the feedback indicates the sensitivity is too high. Eventually, the system arrives at an optimal setting for the sensitivity parameter. The result is a fully automated signal detection system that requires no user intervention or adjustment. The black brackets at the top of FIG. 38 illustrate the signals recognized by the system, clearly indicating its accuracy.

Because the system relies heavily upon averaging, a new algorithm was created that performs an N sample average in fixed time; i.e., regardless of the width of the average, N, each bin requires 1 addition, 1 subtraction, and 1 division. A simpler algorithm would require N additions and 1 division per bin of data.
A snippet of the code is probably the best description:

public double[] smoothingFilter(double[] dataSet, int filterSize) {
    double[] resultSet = new double[dataSet.length];
    double temp = 0.0;
    int i = 0;
    int halfSize = filterSize / 2;
    for (i = 0; i < filterSize; i++) {
        temp += dataSet[i]; // load the accumulator with the first N values
        if (i < halfSize)
            resultSet[i] = dataSet[i]; // pass the first N/2 values through unfiltered
    }
    for (i = halfSize; i < (dataSet.length - halfSize); i++) {
        resultSet[i] = temp / filterSize; // compute the average and store it
        temp -= dataSet[i - halfSize]; // take out the oldest value
        temp += dataSet[i + halfSize]; // add in the newest value
    }
    while (i < dataSet.length) { // pass the trailing N/2 values through unfiltered
        resultSet[i] = dataSet[i];
        i++;
    }
    return resultSet;
}

Automatic Signal Detection (ASD) with Temporal Feature Extraction (TFE)

The system in the present invention uses statistical learning techniques to observe and learn an RF environment over time and identify temporal features of the RF environment (e.g., signals) during a learning period. A knowledge map is formed based on learning data from a learning period. Real-time signal events are detected by an ASD system and scrubbed against the knowledge map to determine if the real-time signal events are typical and expected for the environment, or if there is any event that is neither typical nor expected. The knowledge map consists of an array of normal distributions, where each distribution column corresponds to one frequency bin of the FFT result set provided by a software defined radio (SDR). Each vertical column corresponds to a bell-shaped curve for that frequency. Each pixel represents a count of how many times that frequency was seen at that power level. A learning routine takes the power level of each frequency bin, uses the power level as an index into the distribution column corresponding to that frequency bin, and increments the counter in the location corresponding to the power level.

FIG. 40 illustrates a knowledge map obtained by a TFE process. The top window shows the result of a real-time spectrum sweep of an environment. The bottom window shows a knowledge map, which color codes the values in each column (normal distribution) based on how often the power level of that frequency (column) has been at a particular level. The TFE function monitors its operation and produces a "settled percent". The settled percent is the percentage of the values of the incoming FFT result set that the system has seen before. In this way, the system can know if it is ready to interpret the statistical data that it has obtained. Once it reaches a point where most of the FFT values have been seen before (99.95% or better), it can then perform an interpretation operation.

FIG. 41 illustrates an interpretation operation based on a knowledge map. During the interpretation operation, the system extracts valuable signal identification from the knowledge map. Some statistical quantities are identified. For each column, the power level at which a frequency is seen the most is determined (the peak of the distribution curve), which is represented by the red line in FIG. 41. A desired percentage of power level values is located between the high and low boundaries of the power levels (the shoulders of the curve), which are represented by white lines in FIG. 41. The desired percentage is adjustable. In FIG. 41, the desired percentage is set at 42% based on the learning data. In one embodiment, a statistical method is used to obtain a desirable percentage that provides the highest degree of "smoothness", i.e., the lowest deviation from column to column.
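As a minimal sketch of the learning routine just described, assuming power levels already quantized to integer bin indices (the class and method names are hypothetical):

// Sketch of the TFE knowledge map: one distribution column per frequency
// bin; each sweep increments the counter at the observed power level, and
// the settled percent reports how much of the sweep has been seen before.
public class KnowledgeMap {
    private final int[][] counts; // [frequencyBin][powerBin] occurrence counters

    public KnowledgeMap(int frequencyBins, int powerBins) {
        counts = new int[frequencyBins][powerBins];
    }

    // Learn one sweep; powerBin[f] is the quantized power level of frequency bin f.
    // Returns the settled percent: the percentage of values seen in a prior sweep.
    public double learn(int[] powerBin) {
        int seenBefore = 0;
        for (int f = 0; f < powerBin.length; f++) {
            if (counts[f][powerBin[f]] > 0) {
                seenBefore++; // this frequency has been at this power level before
            }
            counts[f][powerBin[f]]++; // reinforce the distribution column
        }
        return 100.0 * seenBefore / powerBin.length; // interpret once this reaches 99.95% or better
    }
}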
Then, a profile is drawn based on the learning data, which represents the highest power level at which each frequency has been seen during learning. In FIG. 41, the profile is represented by the green line. Gradient detection is then applied to the profile to identify areas of transition. The algorithm continues to accumulate a gradient value as long as the "step" from the previous cell to the current cell is non-zero and in the same direction. When it arrives at a zero step or a step in a different direction, it evaluates the accumulated difference to see if it is significant, and if so, considers it a gradient. A transition is identified by a continuous change (from left to right) that exceeds the average range between the high and low boundaries of power levels (the white lines). Positive and negative gradients are matched, and the resulting interval is identified as a signal. FIG. 42 shows the identification of signals, which are represented by the black brackets above the knowledge display.

FIG. 43 shows more details of the narrow band signals at the left of the spectrum, around 400 MHz in FIG. 42. The red cursor at 410.365 MHz in FIG. 43 points to a narrow band signal. The real-time spectrum sweep in the top window shows the narrow band signal, and the TFE process identifies the narrow band signal as well. To a prior art receiver, a narrow band signal hidden within a wideband signal is not distinguishable or detectable. The systems, methods and devices of the present invention are operable to scan a wideband with high resolution or high definition to identify channel divisions within the wideband, and to identify narrowband signals hidden within the wideband signal which are not a part of the wideband signal itself, i.e., the narrow band signals are not part of the bundled channels within the wideband signal.

FIG. 44 shows more details of the two wide band signals around 750 MHz and a similar signal starting at 779 MHz. The present invention detects the most prominent parts of the signal starting at 779 MHz. The transmitters of these two wide band signals are actually in the distance, and normal signal detectors, which usually have a fixed threshold, are not able to pick up these two wide band signals and only see them as static noise. Because the TFE system in the present invention uses an aggregation of signal data over time, it can identify these signals and fine tune the ASD sensitivity of individual segments. Thus, the system in the present invention is able to detect signals that normal radio gear cannot. ASD in the present invention is enhanced by the knowledge obtained by TFE and is now able to detect and record these signals where gradient detection alone would not have seen them. The threshold bar in the present invention is not fixed, but changeable. Also, at the red cursor in FIG. 44 is a narrow band signal in the distance that normally would not be detected because of its low power at the point of observation. But the present invention interprets knowledge gained over time and is able to identify that signal.

FIG. 45 illustrates the operation of the ASD in the present invention. The green line shows the spectrum data between 720 MHz and 791 MHz. First and second derivatives of the power levels are calculated inside the spectrum on a cell-by-cell basis, displayed as the overlapping blue and red lines at the top. The algorithm then picks the most prominent derivatives and performs a squaring function on them, as displayed by the next red trace.
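For illustration, the gradient accumulation described above might be sketched as follows; the significance test is reduced to a single threshold, the edge matching that follows is only noted in a comment, and all names are hypothetical:

// Sketch of gradient accumulation over the smoothed profile: keep
// accumulating while consecutive steps are non-zero and in the same
// direction; when a run ends, test whether the accumulated change is
// significant. Matching positive and negative gradients (not shown)
// then yields the start and end edges of each signal.
public java.util.List<Integer> findGradients(double[] profile, double significance) {
    java.util.List<Integer> gradientStarts = new java.util.ArrayList<>();
    double accumulated = 0.0;
    int runStart = 0;
    int prevDirection = 0;
    for (int i = 1; i < profile.length; i++) {
        double step = profile[i] - profile[i - 1];
        int direction = (step > 0) ? 1 : (step < 0 ? -1 : 0);
        if (direction != 0 && direction == prevDirection) {
            accumulated += step; // same direction: keep accumulating
        } else {
            if (Math.abs(accumulated) > significance) {
                gradientStarts.add(runStart); // significant run: record a gradient
            }
            accumulated = step; // start a new run at this cell
            runStart = i - 1;
            prevDirection = direction;
        }
    }
    if (Math.abs(accumulated) > significance) {
        gradientStarts.add(runStart); // evaluate the final run
    }
    return gradientStarts;
}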
The software then matches positive and negative gradients to identify the edges of the signals, which are represented by the brackets at the top. Two wideband signals are identified, which may be CDMA, LTE, or another communication protocol used by mobile phones. The red line at the bottom is a baseline established by averaging the spectrum and removing areas identified by the gradients. At the two wideband signals, the red line is flat. By subtracting the baseline from the real spectrum data, groups of cells with average power above baseline are identified, and the averaging algorithm is run against those areas to apply the sensitivity measurement. The ASD system has the ability to distinguish between large eruptions of energy that increase the baseline noise and the narrow band signals that could normally be swamped by the additional energy, because it generates its baseline from the spectrum itself and looks for relative gradients rather than absolute power levels. This baseline is then subtracted from the original spectrum data, revealing the signals, as displayed by the brackets at the top of the screen. Note that the narrow-band signals are still being detected (the tiny brackets at the top that look more like dots) even though there is a hump of noise superimposed on them.

TFE is a learning process that augments the ASD feature in the present invention. The ASD system enhanced with the TFE function in the present invention can automatically tune parameters on a segmented basis; the sensitivity within an area is changeable. The TFE process accumulates small differences over time, and signals become more and more apparent. In one embodiment, the TFE takes 40 samples per second over a 5-minute interval. The ASD system in the present invention is capable of distinguishing signals based on gradients from a complex and moving noise floor, without a fixed threshold bar, when collecting data from an environment. The ASD system with TFE function in the present invention is unmanned and water resistant. It runs automatically 24/7, even submerged in water.

The TFE is also capable of detecting interferences and intrusions. In the normal environment, the TFE settles, interprets and identifies signals. Because it has a statistical knowledge of the RF landscape, it can tell the difference between a low power, wide band signal that it normally sees and a new higher power narrow band signal that may be an intruder. This is because it "scrubs" each of the FFT bins of each event that the ASD system detects against its knowledge base. When it detects that a particular group of bins in a signal from ASD falls outside the statistical range in which those frequencies are normally observed, the system can raise an anomaly report. The TFE is capable of learning new knowledge, never seen before, from the signals identified by a normal detector. In one embodiment, a narrow band signal (e.g., a pit-crew-to-car wireless signal) impinges on an LTE wideband signal; the narrow band signal may be right beside the wideband signal, or drift in and out of the wideband signal. On a display, it just looks like an LTE wideband signal. For example, a narrow band signal with a bandwidth of 12 kHz or 25-30 kHz in a wideband signal with a bandwidth of 5 MHz over a 6 GHz spectrum just looks like a spike buried in the middle. But, because signals are characterized in real time against learned knowledge, the proposed ASD system with TFE function is able to pick out the narrow band intruder immediately.
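A minimal sketch of this scrubbing step, assuming the learned low and high boundaries (the "shoulders" of each distribution column) have already been extracted during interpretation; a real implementation would likely require several out-of-range bins before reporting, and all names here are hypothetical:

// Sketch of scrubbing a detected event against the knowledge map: if a
// bin's observed power falls outside the statistical range learned for
// that frequency, the event is flagged as an anomaly.
public boolean isAnomalous(int[] eventPowerBin, int startBin, int endBin,
                           int[] learnedLow, int[] learnedHigh) {
    for (int f = startBin; f <= endBin; f++) {
        if (eventPowerBin[f] < learnedLow[f] || eventPowerBin[f] > learnedHigh[f]) {
            return true; // outside the range normally observed at this frequency
        }
    }
    return false; // consistent with the learned RF landscape
}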
The present invention is able to detect a narrow band signal with a bandwidth from 1-2 kHz up to 60 kHz inside a wideband signal (e.g., with a bandwidth of 5 MHz) across a 6 GHz spectrum. In FIGS. 40-45, the frequency resolution is 19.5 kHz, and a narrow band signal with a bandwidth of 2-3 kHz can be detected. The frequency resolution is based on the setting of the FFT result bin size. Statistical learning techniques are used for extracting temporal features, creating a statistical knowledge map of what each frequency is, and determining variations, thresholds, etc.

The ASD system with TFE function in the present invention is capable of identifying, demodulating and decoding signals, both wideband and narrowband, with high energy. If a narrowband signal is close to the edge of a wideband LTE signal, the wideband LTE signal is distorted at the edge. If multiple narrowband signals are within a wideband signal, the top edge of the wideband signal is ragged, as the narrow band signals are hidden within the wide band signal. If one narrow band signal is in the middle of a wideband signal, the narrow band signal is usually interpreted as a cell within the wideband signal. However, the ASD system with TFE function in the present invention learns power levels in a spectrum section over time, and is able to recognize the narrow band signal immediately. The present invention is operable to log the result, display it on a channel screen, notify an operator, send alarms, etc. The present invention automatically records spectrum, but does not record all the time. When a problem is identified, relevant information is automatically recorded in high definition.

The ASD system with TFE in the present invention is used for spectrum management. The system in the present invention is set up in a normal environment, starts learning, and stores at least one learning map. The learning function of the ASD system in the present invention can be enabled and disabled. When the ASD system is exposed to a stable environment and has learned what is normal in the environment, it will stop its learning process. The environment is periodically reevaluated, and the learning map is updated at a predetermined timeframe. After a problem is detected, the learning map will also be updated. The ASD system in the present invention can be deployed in stadiums, ports, airports, or on borders. In one embodiment, the ASD system learns and stores the knowledge in that environment. In another embodiment, the ASD system downloads prior knowledge and immediately displays it. In another embodiment, an ASD device can learn from other ASD devices globally. In operation, the ASD system then collects real-time data and compares it to the stored learning map for signal identification. Signals identified by the ASD system with TFE function may be determined to be an error by an operator. In that situation, an operator can manually edit or erase the error, essentially "coaching" the learning system.

The systems and devices in the present invention create a channel plan based on user input or external databases, and look for signals that are not in the plan. Temporal Feature Extraction not only can define a channel plan based on what it learns from the environment, but it also "scrubs" each spectrum pass against the knowledge it has learned. This allows it not only to identify signals that violate a prescribed channel plan, but also to discern the difference between a current signal and the signal that it has previously seen in that frequency location.
If there is a narrow band interference signal where there typically is a wide band signal, the system will identify it as an anomaly because it does not match the pattern of what is usually in that space. The device in the present invention is designed to be autonomous. It learns from the environment, and, without operator intervention, can detect anomalous signals that either were not there before or have changed in power or bandwidth. Once detected, the device can send alerts by text or email and begin high resolution spectrum capture, or IQ capture, of the signal of interest.

FIG. 40 illustrates an environment in which the device is learning. There are some obvious signals, but there is also a very low level wide band signal between 746 MHz and 755 MHz. Typical threshold-oriented systems would not catch this. But the TFE system takes a broader view over time. The signal does not have to be there all the time, or be pronounced, to be detected by the system. Each time it appears in the spectrum serves to reinforce the impression on the learning fabric. These impressions are then interpreted and characterized as signals.

FIG. 43 shows the knowledge map that the device has acquired during its learning process, and shows brackets above what it has determined are signals. Note that the device has determined these signals on its own, without any user intervention or any input from any databases. It is a simple thing to then further categorize the signals by matching against databases, but what sets the device in the present invention apart is that, like its human counterpart, it has the ability to draw its own conclusions based on what it has seen.

FIG. 44 shows a signal identified by the device in the present invention between 746 MHz and 755 MHz with low power levels. It is clear to see that, although the signal is barely distinguishable from the background noise, TFE clearly has identified its edges. Over to the far right is a similar signal that is further away, so that it only presents traces of itself. But again, because the device in the present invention is trained to distinguish random and coherent energy patterns over time, it can clearly pick out the pattern of a signal. Just to the left of that faint signal was a transient narrow band signal at 777.653 MHz. This signal is only present for a brief period of time during the training, typically 0.5-0.7 seconds each instance, separated by minutes of silence, yet the device does not miss it; it remembers those instances and categorizes them as a narrow band signal.

The identification and classification algorithms that the system uses to identify Temporal Features are optimized to be used in real time. Notice that, even though only fragments of the low level wide band signal are detected on each sweep, the system still matches them with the signal that it had identified during its learning phase. Also, as the system is running, it is scrubbing each spectral sweep against its knowledge map. When it finds coherent bundles of energy that are either in places that are usually quiet, or have higher power or bandwidth than it has seen before, it can automatically send up a red flag. Since the system is doing this in real time, it has critical relevance to those in harm's way: the first responder, or the war fighter who absolutely must have clear channels of communication or instant situational awareness of imminent threats. It is one thing to geolocate a signal that the user has identified.
It is an entirely different dimension when the system can identify the signal on its own before the user even realizes it is there. Because the device in the present invention can pick out these signals with a sensitivity that is far superior to a simple threshold system, the threat does not have to present an obvious presence to be detected and alerted. Devices in the prior art merely make it easy for a person to analyze spectral data, both in real time and historically, locally or remotely. But the device in the present invention operates as an extension of the person, performing the learning and analysis on its own, and even finding things that a human typically may miss. The device in the present invention can easily capture signal identifications, match them to databases, and store and upload historical data. Moreover, the device has intelligence and the ability to be more than a simple data storage and retrieval device. The device is a watchful eye in an RF environment, and a partner to an operator who is trying to manage, analyze, understand and operate in the RF environment.

Geolocation

The prior art is dependent upon a synchronized receiver for power, phase, frequency, angle, and time of arrival, and an accurate clock for timing; significantly, it requires three devices to be used, wherein all are synchronized and include directional antennae to identify a signal with the highest power. Advantageously, the present invention does not require synchronization of receivers in a multiplicity of devices to provide geolocation of at least one apparatus unit or device or at least one signal, thereby reducing cost and improving functionality of each of the at least one apparatus in the systems described hereinabove for the present invention. Also, the present invention provides for larger frequency range analysis, and provides database(s) for capturing events, patterns, times, power, phase, frequency, angle, and combinations thereof for the at least one signal of interest in the RF spectrum. The present invention provides for better measurements and data of signal(s) with respect to time, frequency with respect to time, power with respect to time, geolocation, and combinations thereof.

In preferred embodiments of the at least one apparatus unit of the present invention, geolocation is provided automatically by the apparatus unit using at least one anchor point embedded within the system, by power measurements and transmission that provide for "known" environments of data. The known environments of data include measurements from the at least one anchor point that characterize the RF receiver of the apparatus unit or device. The known environments of data include a database including information from the FCC database and/or a user-defined database, wherein the information from the FCC database includes at least maximum power based upon frequency, protocol, device type, and combinations thereof. With the geolocation function of the present invention, there is no requirement to synchronize receivers as with the prior art; the at least one anchor point and the location of an apparatus unit provide the required information to automatically adjust to a first anchor point, or to a second anchor point in the case of at least two anchor points, if the second anchor point is easier to adopt. The known environment data provide for expected spectrum and signal behavior as the reference point for the geolocation.
Each apparatus unit or device includes at least one receiver for receiving RF spectrum and location information as described hereinabove. In the case of one receiver, it is operable with and switchable between antennae for receiving RF spectrum data and location data; in the case of two receivers, preferably each of the two receivers is housed within the apparatus unit or device. A frequency lock loop is used to determine if a signal is moving, by determining if there is a Doppler change for signals detected.

Location determination for geolocation is provided by determining a point (x, y) or Lat/Lon from the at least three anchor locations (x1, y1); (x2, y2); (x3, y3) and signal measurements at either the node or the anchors. Signal measurements provide a system of non-linear equations that must be solved for (x, y) mathematically; the measurements provide a set of geometric shapes which intersect at the node location, providing determination of the node. For trilateration methods converting observations to distances, the following relation is used:

RSS: d = d0 * 10^((P0 - Pr) / (10n))

wherein d0 is the reference distance derived from the reference transmitter and signal characteristics (e.g., frequency, power, duration, bandwidth, etc.); P0 is the power received at the reference distance; Pr is the observed received power; and n is the path loss exponent. Distance from observations is related to the positions by the following equations:

d1 = sqrt((x - x1)^2 + (y - y1)^2)
d2 = sqrt((x - x2)^2 + (y - y2)^2)
d3 = sqrt((x - x3)^2 + (y - y3)^2)

Also, in another embodiment of the present invention, geolocation application software operable on a computer device or on a mobile communications device, such as, by way of example and not limitation, a smartphone, is provided. Method steps are illustrated in the flow diagram shown in FIG. 46, including: starting a geolocation app; calling active devices via a connection broker; opening a spectrum display application; selecting at least one signal to geolocate; selecting at least three devices (or apparatus units of the present invention) within a location or region; verifying that the devices or apparatus units are synchronized to a receiver to be geolocated; performing signal detection (as described hereinabove), including center frequency, bandwidth, peak power, channel power, and duration; identifying modulation of protocol type and obtaining maximum, median, minimum and expected power; calculating distance based on a selected propagation model; calculating distance based on a one (1) meter path loss model for each of the first, second, and third devices; performing circle transformations for each location; checking if RF propagation distances form circles that are fully enclosed; checking if RF propagation distances form circles that do not intersect; performing trilateration of devices; deriving the z component to convert back to known GPS Lat/Lon (latitude and longitude) coordinates; and setting the resulting coordinates as the emitter location on mapping software to indicate the geolocation.
The equations referenced in FIG. 46 are provided hereinbelow.

Equation 1, for calculating distance based on the selected propagation model:

PLossExponent = (ParameterC - 6.55 * log10(BS_AntHeight)) / 10
MS_AntGainFunc = 3.2 * (log10(11.75 * MS_AntHeight))^2 - 4.97
Constant(C) = ParameterA + ParameterB * log10(Frequency) - 13.82 * log10(BS_AntHeight) - MS_AntGainFunc
DistanceRange = 10^((PLoss - PLossConstant) / (10 * PLossExponent))

Equation 2, for calculating distance based on a 1 meter path loss model (first device):

d0 = 1; k = PLossExponent
PL_d = Pt + Gt - RSSI - TotalMargin
PL_0 = 32.44 + 10 * k * log10(d0) + 10 * k * log10(Frequency)
D = d0 * 10^((PL_d - PL_0) / (10 * k))

Equation 3: same as Equation 2, for the second device.

Equation 4: same as Equation 2, for the third device.

Equation 5: Perform circle transformations for each location (x, y, z) and distance d. Verify that A^T A = 0, where A = {matrix of locations 1-N} in relation to distance; if not, then perform a circle transformation check.

Equation 6: Perform trilateration of devices; if more than three (3) devices, aggregate and trilaterate by device; set circles to a zero origin and solve from y = Ax, where y = [x, y] locations.

Equation 7:

[x]   [2(xa - xc)  2(ya - yc)]^(-1)  [xa^2 - xc^2 + ya^2 - yc^2 + dc^2 - da^2]
[y] = [2(xb - xc)  2(yb - yc)]       [xb^2 - xc^2 + yb^2 - yc^2 + dc^2 - db^2]

Note: check if the RF propagation distances form circles where one or more circles are fully enclosed; if so, based upon modulation type and measured power, set Distance1 of the enclosed circle to Distance2 minus the distance between the two center points. Next, check whether some of the RF propagation circles do not intersect; if so, based on modulation type and maximum RF power, set the distance of each circle to the distance of the circle plus (distance between circle centers - sum of the distances) / 2.

Note that deriving the z component to convert back to a known GPS Lat/Lon coordinate is provided by: z = sqrt(Dist^2 - x^2 - y^2).

Accounting for unknowns using Differential Received Signal Strength (DRSS) is provided by the following equation when the reference or transmit power is unknown:

di / dj = 10^((Prj - Pri) / (10n))

And signal strength measurements in dBm are provided by the following:

Pr2(dBm) - Pr1(dBm) = 10n * log10(sqrt((x - x1)^2 + (y - y1)^2)) - 10n * log10(sqrt((x - x2)^2 + (y - y2)^2))
Pr3(dBm) - Pr1(dBm) = 10n * log10(sqrt((x - x1)^2 + (y - y1)^2)) - 10n * log10(sqrt((x - x3)^2 + (y - y3)^2))
Pr2(dBm) - Pr3(dBm) = 10n * log10(sqrt((x - x3)^2 + (y - y3)^2)) - 10n * log10(sqrt((x - x2)^2 + (y - y2)^2))

For geolocation systems and methods of the present invention, preferably two or more devices or units are used to provide nodes; more preferably, three devices or units are used together or "joined" to achieve the geolocation results, and preferably at least three devices or units are provided. Software is provided and operable to enable a network-based method for transferring data between or among the at least two devices or units, or more preferably at least three nodes. A database is provided having a database structure to receive input from the nodes (transferred data), and at least one processor coupled with memory acts on the database for performing calculations, transforming measured data, and storing the measured data and statistical data associated with it. The database structure is further designed, constructed and configured to derive the geolocation of nodes from saved data and/or from real-time data that is measured by the units; also, the database and application of systems and methods of the present invention provide for geolocation of more than one node at a time.
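For illustration, the RSS ranging relation and the linear solve of Equation 7 might be implemented as follows; the class and method names are hypothetical, and the sketch omits the circle transformation checks and the DRSS case described above:

// Sketch of RSS-based ranging (log-distance model) plus three-anchor
// trilateration per Equation 7, solved by Cramer's rule.
public final class Trilateration {

    // d = d0 * 10^((P0 - Pr) / (10 * n))
    public static double rssToDistance(double d0, double p0, double pr, double n) {
        return d0 * Math.pow(10.0, (p0 - pr) / (10.0 * n));
    }

    // Solve the 2x2 linear system of Equation 7 for the node position (x, y),
    // given anchors (xa, ya), (xb, yb), (xc, yc) and ranges da, db, dc.
    public static double[] solve(double xa, double ya, double da,
                                 double xb, double yb, double db,
                                 double xc, double yc, double dc) {
        double a11 = 2 * (xa - xc), a12 = 2 * (ya - yc);
        double a21 = 2 * (xb - xc), a22 = 2 * (yb - yc);
        double b1 = xa * xa - xc * xc + ya * ya - yc * yc + dc * dc - da * da;
        double b2 = xb * xb - xc * xc + yb * yb - yc * yc + dc * dc - db * db;
        double det = a11 * a22 - a12 * a21; // zero when the anchors are collinear
        if (Math.abs(det) < 1e-12) {
            throw new IllegalArgumentException("anchors are collinear");
        }
        double x = (b1 * a22 - b2 * a12) / det;
        double y = (a11 * b2 - a21 * b1) / det;
        return new double[] { x, y };
    }
}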
Additionally, software is operable to generate a visual representation of the geolocation of the nodes as a point on a map location. Errors in measurements due to imperfect knowledge of the transmit power or antenna gain, measurement error due to signal fading (multipath), interference, thermal noise, non-line-of-sight (NLOS) propagation error (shadowing effect), and/or an unknown propagation model are overcome using differential RSS measurements, which eliminate the need for transmit power knowledge, and which can incorporate TDOA and FDOA techniques to help improve measurements. The systems and methods of the present invention are further operable to use statistical approximations to remove error causes from noise, timing and power measurements, multipath, and NLOS measurements. By way of example, the following methods are used for geolocation statistical approximations and variances: maximum likelihood (nearest neighbor or Kalman filter); least squares approximation; Bayesian filter if prior knowledge data is included; and the like. Also, TDOA and FDOA equations are derived to help solve inconsistencies in distance calculations. Several methods, or combinations of these methods, may be used with the present invention, since geolocation will be performed in different environments, including but not limited to indoor environments, outdoor environments, hybrid (stadium) environments, inner city environments, etc.

In recent years, demand for real-time information has increased exponentially. Consumers have embraced social media applications and there are now more mobile subscriptions than people on the planet. Studies show that a typical mobile device experiences an average of 10 network interactions per minute (e.g., Facebook push, Twitter download). For example, Facebook on its own is driving 1 billion updates per minute. Rabid consumer demand, combined with the growing needs of government and industry (e.g., 2-way, trunked, IoT), translates into more wireless activities over wider frequency ranges. The activities are often intermittent, with short durations of only a few hundred milliseconds. Social media applications and other cellular activities (e.g., background refresh) are even shorter in duration. Until now, the magnitude of activity has been impossible to keep track of and even harder to gain intelligence from.

The present invention provides systems and methods for unmanned vehicle recognition. The present invention relates to the automatic signal detection, temporal feature extraction, geolocation, and edge processing disclosed in U.S. patent application Ser. No. 15/412,982 filed Jan. 23, 2017, U.S. patent application Ser. No. 15/681,521 filed Aug. 21, 2017, U.S. patent application Ser. No. 15/681,540 filed Aug. 21, 2017, and U.S. patent application Ser. No. 15/681,558 filed Aug. 21, 2017, each of which is incorporated herein by reference in its entirety.

In one embodiment of the present invention, automatic signal detection in an RF environment is based on power distribution by frequency over time (PDFT), including the first derivative and the second derivative values. A PDFT processor is provided for automatic signal detection. In one embodiment, the PDFT processor increments power values in a 2-dimensional (2D) array from a frequency spectrum over a set length of time. The length of time is user-settable. For example, the length of time can be set at 5 minutes, 1 hour, or 1 day. The length of time can be set as low as 1 second.
Typically, the smallest time interval for setting the environment is 5 seconds. A histogram with frequency as the horizontal axis and power as the vertical axis can be used to describe power values across a spectrum during a certain period of time, which is called the Power Bin Occurrence (PBO). In one embodiment, power levels are collected for a specified length of time, and statistical calculations are performed on the PBO to obtain the power distribution by frequency for a certain time segment (PDFT). The statistical calculations create baseline signals and identify what is normal in an RF environment and what constitutes a change to the RF environment. PBO data is constantly updated and compared to the baseline to detect anything unique in the RF environment. The PDFT collects power values and describes the RF environment with collected power values by frequency over the time range of the collection. For example, the PDFT processor learns what should be present in the RF environment in a certain area during the time segment from 3 pm to 5 pm. If there is a deviation from historical information, the PDFT processor is configured to send an alarm to operators.

In one embodiment, PBO is used to populate a 3-dimensional (3D) array and create the Second Order Power Bin Occurrence (SOPBO). The time segment of the PBO is a factor of the length of the SOPBO time segment. The first two dimensions are the same as in the PBO, but the third dimension in the SOPBO describes how often the corresponding frequency bin and power bin are populated over the SOPBO time segment. The result can be described as a collection of several 2D histograms across percent-of-occurrence bins, such that each histogram represents a different frequency bin and power bin combination. This provides a percentage of utilization of the frequency for non-constant signals such as radar, asynchronous data-on-demand links, or push-to-talk voice.

In one embodiment, the PBO, PDFT, and SOPBO data sets are used for signal detection. For example, statistical calculations of PBOs during a certain time segment are used along with a set of detection parameters to identify possible signals. A frequency-dependent noise floor is calculated by taking the spectral mean from the PDFT data and applying a type of median filter over subsets of frequency. For example, but not for limitation, detection parameters include known signals, basic characteristics, databases of telecom signals, etc. For example, but not for limitation, median filter types include the Standard Median Filter (MF), Weighted Median Filter (WMF), Adaptive Median Filter (AMF) and Decision Based Median Filter (DBMF). The noise floor is then assessed for large changes in power, which indicate that the noise floor values are following the curvature of possible signals. At these frequencies, the noise floor is adjusted to adjacent values. Power values below the noise floor are ignored in the rest of the signal detection process. To detect signals, the first derivative is calculated from a smoothed PDFT frequency spectrum. Derivative values exceeding a threshold set based on the detection parameters are matched to nearby values along the frequency spectrum that are equal and opposite within a small uncertainty level. Once frequency edges are found, power values are used to further classify signals. The whole process, including the noise floor calculation, is repeated for different time segments.
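As an illustration, the frequency-dependent noise floor might be computed as sketched below, assuming the Standard Median Filter variant; the window size is illustrative, the final adjustment pass is described only in a comment, and all names are hypothetical:

// Sketch of the frequency-dependent noise floor: take the spectral mean
// from the PDFT data, then apply a median filter over subsets of frequency.
// A further pass (not shown) adjusts bins with large power changes to
// adjacent values so the floor does not follow the curvature of signals.
public double[] noiseFloor(double[] spectralMean, int window) {
    double[] floor = new double[spectralMean.length];
    int half = window / 2;
    for (int f = 0; f < spectralMean.length; f++) {
        int lo = Math.max(0, f - half);
        int hi = Math.min(spectralMean.length - 1, f + half);
        double[] subset = java.util.Arrays.copyOfRange(spectralMean, lo, hi + 1);
        java.util.Arrays.sort(subset);
        floor[f] = subset[subset.length / 2]; // median of the frequency subset
    }
    return floor;
}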
The detection parameters are adjusted over time based on signals found or not found, allowing the signal detection process to develop as the PDFT processor runs. The first derivative of the FFT data is used to detect signals; measure power, frequency and bandwidths of detected signals; determine the noise floor and its variations; and classify detected signals (e.g., wideband signals, narrowband signals). The second derivative of the FFT data is used to calculate velocity (i.e., change of power) and acceleration (i.e., rate of change of power), and to identify movements based on changes and/or the Doppler effect. For example, the second derivative of the FFT data in an RF environment can be used to determine if a signal emitting device is near a road or moving with a car. A SOPBO is the second derivative (i.e., a rate of change of power). The second derivative shows if a signal varies over time. In one embodiment, the power level of the signal varies over time. For example, a simplex network has base station signals transmitting in certain time segments and mobile signals in a different time segment. The SOPBO can catch the mobile signals while the first order PBO cannot. For signals that vary in time, such as Time Division Duplex (TDD) LTE or a radar, the SOPBO is important.

FIG. 47 illustrates a configuration of a PDFT processor according to one embodiment of the present invention. In one embodiment, a PDFT processor for automatic signal detection comprises a management plane, at least one RF receiver, a generator engine, and an analyzer engine. The management plane is operable to configure, monitor and manage job functions of the PDFT processor. The at least one RF receiver is operable to receive RF data, generate I/Q data based on the received RF data, and perform FFT analysis. The generator engine is configured to perform a PBO process, and to generate PDFT data and SOPBO data based on PBO data. The analyzer engine is configured to calculate the noise floor, smooth the max hold, generate a PDFT baseline, and identify signals. The smooth max hold function is a curve fitting process with a partial differential equation to provide a running average across adjacent points to reject impulse noise that can be present in the FFT data. The analyzer engine is further configured to calculate a SOPBO baseline based on the SOPBO data.

FIG. 48 is a flow chart for data processing in a PDFT processor according to one embodiment of the present invention. A job manifest is created for initial configuration of a PDFT generator engine or for updating the configuration of the PDFT generator engine. The job manifest also starts an RF receiver to receive radio data from an RF environment. The received radio data is transmitted to an FFT engine for FFT analysis. The PDFT generator engine pulls the FFT data stream from the FFT engine to build up a base PBO and run a PBO process continuously. An SOPBO process and a PDFT process are performed based on PBO data. SOPBO data from the SOPBO process and PDFT data from the PDFT process are published and saved to storage. The data from the PDFT generator engine is transmitted to a PDFT analyzer engine for analytics including signal detection and classification, event detection and environment monitoring, mask creation, and other analyzer services.

FIG. 49 illustrates data analytics in an analyzer engine according to one embodiment of the present invention. Classical RF techniques and new RF techniques are combined to perform data analytics including environment monitoring and signal classification.
Classical RF techniques are based on known signals and initial parameters including demodulation parameters, prior knowledge parameters, and user provided parameters. New RF techniques use machine learning to learn signal detection parameters and signal properties to update detection parameters for signal classification. New signals are found and used to update learned signal detection parameters and taught signal properties based on supervised and unsupervised machine learning.

In one embodiment, the automatic signal detection process includes mask creation and environment analysis using masks. Mask creation is a process of elaborating a representation of an RF environment by analyzing a spectrum of signals over a certain period of time. A desired frequency range is entered by a user to create a mask, and FFT streaming data is also used in the mask creation process. A first derivative is calculated and used for identifying maximum power values. A moving average value is created as FFT data is received during a time period selected by the user for mask creation. For example, the time period is 10 seconds. The result is an FFT array with an average of maximum power values, which is called a mask. FIG. 50 illustrates a mask according to one embodiment of the present invention.

In one embodiment, the mask is used for environment analysis. In one embodiment, the mask is used for identifying potential unwanted signals in an RF environment. Each mask has an analysis time. During its analysis time, a mask is scanned and live FFT streaming data is compared against the mask before the next mask arrives. If a value is detected over the mask range, a trigger analysis is performed. Each mask has a set of trigger conditions, and an alarm is triggered into the system if the trigger conditions are met. In one embodiment, there are three main trigger conditions: alarm duration, dB offset, and count. The alarm duration is the time window for which a signal needs to appear in order to be considered an alarm. For example, the time window is 2 seconds. If a signal is seen for 2 seconds, it passes to the next condition. The dB offset is the dB value a signal needs to be above the mask to be considered a potential alarm. The count is the number of times the first two conditions need to occur before an alarm is triggered into the system.

FIG. 51 illustrates a workflow of automatic signal detection according to one embodiment of the present invention. A mask definition is specified by a user for an automatic signal detection process, including creating masks, saving masks, and performing environment analysis based on the masks created and the FFT data stream from a radio server. If trigger conditions are met, alarms are triggered and stored to a local database for visualization. FIG. 52 is a screenshot illustrating alarm visualization via a graphical user interface (GUI) according to one embodiment of the present invention. In the GUI, current alarms, acknowledged alarms, and dismissed alarms in a certain RF environment are listed with information including types, counts, durations, carrier frequencies, technologies, and band allocations.

In one embodiment, a detection algorithm is used for alarm triggering. The detection algorithm detects power values over the mask considering the dB offset condition, but does not trigger an alarm yet. FIG. 53 illustrates a comparison of live FFT stream data and a mask considering a dB offset according to one embodiment of the present invention.
FIG. 53 illustrates a comparison of live FFT stream data and a mask considering a dB offset according to one embodiment of the present invention. The dB offset is 5 dB, so the detection algorithm only identifies power values that are at least 5 dB higher than the mask. The detection algorithm then identifies peaks for power values above the mask after considering the dB offset. In one embodiment of the present invention, a flag is used for identifying peak values. A flag is a Boolean value used for indicating a binary choice. FIG. 54 is a snippet of the code of the detection algorithm defining a flag according to one embodiment of the present invention. If the flag is TRUE, the detection algorithm keeps looking for peak values. A forEach function analyzes each value to find the next peak. Once a peak value is reached, the trace goes down to the value nearest to the mask, and the flag is set to FALSE. FIG. 55 is a snippet of the code of the detection algorithm identifying peak values according to one embodiment of the present invention.

In one embodiment, live FFT stream data has multiple peaks before falling under the mask. FIG. 56 illustrates a complex spectrum situation according to one embodiment of the present invention. Live FFT stream data in two alarm durations has multiple peaks before falling under the mask. FIG. 57 is an analysis of the live FFT stream data above the mask in the first alarm duration in FIG. 56 according to one embodiment of the present invention. A first peak is identified, and the power value starts to decrease. A first value nearest to the mask after the first peak is identified; the flag is still TRUE after comparing the first value nearest to the mask against the mask, so the detection algorithm keeps looking for peaks. Then, a second peak is identified, and the power value starts to decrease. A second value nearest to the mask after the second peak is identified. The second value is greater than the first value and the flag is still TRUE, so the detection algorithm keeps looking for peak values. Then a third peak value is identified, and a third value nearest to the mask is also identified. The third value is on the mask considering the offset value, and the flag is set to FALSE. By comparison, the third peak value is considered the real peak value for the power values above the mask in the first alarm duration of FIG. 56.

Once all the peaks are found, the detection algorithm checks the alarm duration, which is a time window in which a signal needs to be seen in order to be considered for alarm triggering. The first time that the detection algorithm sees the peak, it saves the time in memory. If the signal is still present during the time window, or appears and disappears during that time, the detection algorithm considers triggering an alarm. If the condition is not met, a real-time alarm is not sent to a user; however, the detected sequence is recorded for future analysis. FIG. 58 is a snippet of the code of the detection algorithm checking the alarm duration according to one embodiment of the present invention. If both the dB offset condition and the alarm duration condition are met, the detection algorithm analyzes the count condition. If the number of times specified in the count condition is met, the detection algorithm triggers the alarm. In one embodiment, all alarms are returned as a JSON array, and a forEach function creates the structure and triggers the alarm. FIG. 59 is a snippet of the code of the detection algorithm triggering an alarm according to one embodiment of the present invention.
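Since the code snippets referenced in FIGS. 54, 55, 58 and 59 are not reproduced here, the following is a hedged sketch of the same flag, duration, and count mechanics; the helper names and state handling are assumptions, not the figures' actual code.

```python
# Hedged sketch of the flag/duration/count mechanics described above.
import time

def real_peak(values, mask_plus_offset):
    """Scan samples above the mask; the flag stays TRUE until the trace comes
    back down to the mask level, keeping the highest peak seen so far."""
    flag, peak = True, None
    for v in values:
        if peak is None or v > peak:
            peak = v
        if v <= mask_plus_offset:
            flag = False  # trace returned to the mask: stop looking
            break
    return peak

class AlarmTrigger:
    """Duration and count conditions applied after the dB offset condition."""
    def __init__(self, duration_s=2.0, count_needed=3):
        self.duration_s, self.count_needed = duration_s, count_needed
        self.first_seen, self.hits = None, 0

    def update(self, signal_present, now=None):
        now = time.monotonic() if now is None else now
        if not signal_present:
            self.first_seen = None
            return False
        if self.first_seen is None:
            self.first_seen = now               # remember first sighting
        if now - self.first_seen >= self.duration_s:
            self.hits += 1                      # duration condition met
            self.first_seen = now               # start a new window
        return self.hits >= self.count_needed   # count condition met

print(real_peak([-88, -80, -75, -83, -70, -89, -95], mask_plus_offset=-90))
```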
The present invention provides spectrum monitoring and management, spectrum utilization improvements, critical asset protection/physical security, interference detection and identification, real-time situational awareness, drone threat management, and signals intelligence (SIGINT). Advantageously, the automatic signal detection in the present invention provides automated and real-time processing, environmental learning, autonomous alarming and operations (e.g., direction finding, demodulation), wideband detection, etc. The automatic signal detection in the present invention is of high speed and high resolution with low backhaul requirements, and can work in both portable and fixed modes with cellular and land mobile radio (LMR) demodulation capability. The automatic signal detection system in the present invention is operable to integrate with third party architecture, and can be configured with a distributed architecture and remote management. In one embodiment, the automatic signal detection of the present invention is integrable with any radio server, including any radio and software defined radio, for example, Ettus SDR radio products.

Specifically, spectrum solutions provided by the automatic signal detection technology in the present invention have the following advantages: task automation, edge processing, high-level modular architecture, and wideband analysis. Task automation simplifies the work effort required to perform the following tasks: receiver configuration, process flow and orchestration, trigger and alarm management, autonomous identification of conflicts and anomalous signal detection, automated analytics and reporting, and system health management (e.g., system issues/recovery, software updates, etc.). FIG. 60 is a screenshot illustrating a job manager screen according to one embodiment of the present invention. FIG. 61 illustrates trigger and alarm management according to one embodiment of the present invention. Task automation enables an operator to send a job to one or multiple systems distributed across a geography. Each job contains a pre-built, editable manifest, which can configure receivers and outline alarm conditions with appropriate actions to execute.

As an example, for a baseline analysis task, the system automatically scans multiple blocks of spectrum in UHF, VHF, Telco bands and ISM bands such as 2.4 GHz and 5.8 GHz, stores multiple derivatives regarding signal and noise floor activity, produces an automated report showing activity and occupancy over a specified time, analyzes signal activity to correctly channelize activity by center frequency and bandwidth, and combines customer-supplied or nationally available databases with the collected data to add context (e.g., license, utilization, etc.). The baseline analysis task provides an operator with a view into a spectral environment regarding utilization and occupancy. This can be of assistance when multiple entities (local, state and federal agencies) have coverage during a critical event and need to coordinate frequencies. Multiple radios along with multiple systems across a geography can be commanded to begin gathering data in the appropriate frequency bands. Resolution bandwidth and attenuation levels are adjustable, coordination is made simple, and actionable information is returned without significant manual effort.
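As a non-limiting illustration of the editable manifest described above, a job might be sketched as a small configuration object; every field name and value here is a hypothetical assumption, not the product's actual manifest schema.

```python
# Hypothetical job manifest sketched as a Python dict; all fields and values
# are illustrative assumptions, not the real schema.
job_manifest = {
    "receivers": [
        {"center_mhz": 2440.0, "span_mhz": 80.0, "rbw_khz": 10.0, "atten_db": 10},
        {"center_mhz": 5790.0, "span_mhz": 80.0, "rbw_khz": 10.0, "atten_db": 10},
    ],
    "masks": [
        {"name": "ism_2g4", "range_mhz": [2400.0, 2483.5], "build_seconds": 10},
    ],
    "triggers": {"db_offset": 5.0, "alarm_duration_s": 2.0, "count": 3},
    "actions": ["create_alarm", "send_email", "store_iq", "direction_find"],
}
```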
The systems provided in the present invention are operable to process RF data and perform data manipulation directly at the sensor level. All data can be pushed to a server, but by processing the data first at the sensor, much like in IoT applications, more can be done with less. Overall, edge processing makes information more actionable and reduces cost. The systems of the present invention also leverage machine learning to drive automation at the edge to a higher level, which makes solutions provided by the present invention more intuitive, with greater capability than other remote spectrum monitoring solutions. Edge processing also reduces the bandwidth requirements for the network by distilling data prior to transfer. A reduction in storage requirements, both on the physical system and for a data pipe, enables more deployment options and strategies. For example, different deployment options and strategies include vehicle mounted (e.g., bus or UPS trucks mapping a geography with cellular backhaul), transportable (e.g., placed in a tower on a limited basis) where ethernet is not available, and man-portable (e.g., an interactive unit connected to other mobile or fixed units for comparative analysis).

Core capabilities processed on the node at the edge of the network include spectrum reconnaissance, spectrum surveillance with tip and cue, and signal characterization. Spectrum reconnaissance includes automatic capture and production of detail regarding spectrum usage over frequency, geography and time. More actionable information is provided with edge processing, distributed architecture and intelligent data storage. Spectrum surveillance includes automated deconfliction over wide bands by comparing real-time data to user-supplied, regional and learned data sets and producing alarms. Nodes can also work with third party systems, such as cameras, making them smarter. Signal characterization provides actionable information. Signals of interest are decoded and demodulated by the system, with location approximation or direction, to improve situational intelligence.

In one embodiment, edge processing of the present invention includes four steps. At step one, first and second derivative FFT analysis is performed in near real time, providing noise floor estimates and signal activity tracking. FIG. 62 is a screenshot illustrating a spectrum with RF signals and related analysis. FIG. 63 is a screenshot illustrating identified signals based on the analysis in FIG. 62. The spectrum in the shaded areas in FIG. 63 is identified as signals. At step two, the analysis is aggregated, signal bandwidths and overall structure are defined, and data is stored to create baselines and be used in reporting. At step three, incoming FFT data is compared to existing baselines to find potential conflicts with the baseline. When conflicts are detected, parameters are sent to an event manager (e.g., a logic engine). At step four, the event manager utilizes user-supplied knowledge, publicly available data, job manifests and learned information to decide appropriate actions. Action requests such as creating an alarm, sending an e-mail, storing I/Q data, or performing DF are sent to a controller.
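A minimal sketch of step three above, assuming the live FFT row and the learned baseline are NumPy arrays in dB; the 6 dB margin and the span representation are illustrative assumptions, not values from the specification.

```python
# Sketch of edge-processing step three: compare an incoming FFT row to a
# learned baseline and emit conflict parameters for the event manager.
import numpy as np

def find_conflicts(live, baseline, margin_db=6.0):
    """Return (start_bin, stop_bin, peak_db) spans where live power exceeds
    the baseline by more than margin_db."""
    over = live > baseline + margin_db
    spans, start = [], None
    for i, flag in enumerate(over):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            spans.append((start, i, float(live[start:i].max())))
            start = None
    if start is not None:
        spans.append((start, len(over), float(live[start:].max())))
    return spans

baseline = np.full(256, -95.0)
live = baseline.copy()
live[100:108] = -60.0                    # a new emitter above the baseline
print(find_conflicts(live, baseline))    # -> [(100, 108, -60.0)]
```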
A modular approach to system design and distributed computing allows for proper resource management and control when enabled by the right system control solution, which maximizes performance while keeping per-unit cost down. A loosely coupled solution architecture also allows for less costly improvements to the overall network. Parallel processing also enables multiple loosely coupled systems to operate simultaneously without inhibiting each other's independent activities. FIG. 64 is a diagram of a modular architecture according to one embodiment of the present invention. A modular design enables different components to be integrated and updated easily, without the need for costly customization or the never-ending purchase of new equipment, and makes it easier to add in additional hardware/software modules. Compared to industry standard tightly coupled architectures, which increase complexity and reduce scalability, reliability and security over time, the loosely coupled modular approach provides standardization, consolidation, scalability and governance while reducing the cost of operation.

The spectrum monitoring solutions provided in the present invention significantly enhance situational intelligence and physical security, and reduce utility complexity and project risk. The spectrum management systems provided in the present invention are operable to detect and report on incidents in near real time. Remote sensors are placed at a site with the capability of capturing and processing RF activity from 40 MHz to 6 GHz. Highly accurate baselines are constructed for automated comparison and conflict detection. Systems are connected to a centralized monitoring and management system, providing alarms with details to a network operations center. On-site systems can also provide messages to additional security systems on-site, such as cameras, to turn them to the appropriate azimuths.

In one embodiment, information such as the presence of a transmission system can be used in an unmanned vehicle recognition system (UVRS) to detect the presence of an unmanned vehicle. The unmanned vehicle can be air-borne, land-based, water-borne, and/or submerged. The detection of certain modulation schemes can be used to identify the presence of mobile phones or mobile radios. This information, coupled with direction finding, provides situational intelligence for informed decision making and rapid response. Measurements and signal intelligence regarding an RF spectrum assist in reducing the risk of financial losses due to theft, vandalism, and power disruptions, providing additional safety for employees and visitors, making other security technologies, such as thermal cameras and IP video, smarter by working in tandem to identify and locate the presence of threats, and capturing and storing I/Q data, which can be utilized as evidence for legal proceedings.

Wireless devices can be utilized across multiple bands. While other monitoring systems are limited in bandwidth (i.e., limited focus) or resolution (making it difficult to see narrowband signals), the systems in the present invention are designed to be more flexible, adaptable, and capable of surveying the entire communications environment looking for illicit activity. FIG. 65 illustrates a communications environment according to one embodiment of the present invention. In one embodiment, a signal characterization engine is configured to provide information including location information and direction, operator name, drone transmission type, and MAC address. All of this is actionable information enabling swift resolution. FIG. 66 illustrates a UVRS interface with positive detections, according to one embodiment of the present invention. FIG. 67 lists signal strength measurements according to one embodiment of the present invention.
In one embodiment, the systems of the present invention can be used for mitigating drone threats, identifying and locating jammers, and ensuring communications. The systems of the present invention are designed to identify illicit activity involving use of the electromagnetic spectrum, such as drone threats and directed energy/anti-radiation weapons aimed at degrading combat capability (e.g., jammers). The systems of the present invention also bring structure to largely unstructured spectral data, enabling clearer communications (interference reduction) and efficient communication mission planning. Jammers are becoming more prevalent and can be deployed on-site or off premises, making them very difficult to locate. The solutions provided by the present invention automatically send alerts as to the presence of wideband jammers interfering with critical parts of the communications spectrum, and assist in the location of focused jammers, which can be very difficult to find. The ability to proactively and rapidly locate jamming devices reduces disruptions in communications, improves overall security, and limits the potential for financial loss. FIG. 68 illustrates a focused jammer in a mobile application according to one embodiment of the present invention. FIG. 69 illustrates swept RF interference by a jammer according to one embodiment of the present invention.
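Building on the conflict-span sketch earlier, one simple heuristic for the wideband-jammer alerting just described is to flag any single conflict span that covers an unusually large fraction of the monitored band; the 20% threshold is an illustrative assumption, not a value taken from the specification.

```python
# Hedged heuristic for wideband-jammer alerting: flag a single conflict span
# that covers a large fraction of the monitored band.
def looks_like_wideband_jammer(spans, total_bins, min_fraction=0.20):
    """spans: (start_bin, stop_bin, peak_db) tuples, e.g., from find_conflicts."""
    return any((stop - start) / total_bins >= min_fraction
               for start, stop, _peak in spans)

print(looks_like_wideband_jammer([(10, 90, -55.0)], total_bins=256))    # True
print(looks_like_wideband_jammer([(100, 108, -60.0)], total_bins=256))  # False
```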
To maintain security and coordinate operations, consistent and quality communications are imperative. The systems provided in the present invention have multiple deployment strategies, and data can be collected and distilled into strength and quality metrics. The data is easy to access in reports. FIG. 70 illustrates data collection, distillation and reporting according to one embodiment of the present invention. The systems provided in the present invention have the capability of building baselines, detecting when signals exist which are not common for the environment, creating alerts, and automatically starting processes such as direction finding.

The systems provided in the present invention can be used for countering unmanned vehicles, including but not limited to unmanned aerial systems (UASs), land-based vehicles, water-borne vehicles, and submerged vehicles. FIG. 71 is a comparison of multiple methodologies for detecting and classifying UAS. Of the methods listed in FIG. 71, RF detection provides the highest level of accuracy in classifying an object as a UAS. An RF-based counter-UAS system comprises multiple receivers in a single platform. In one embodiment, there are four receivers. Each receiver is operable to scan multiple bands of spectrum looking for UAS signatures. For example, the multiple bands of spectrum include the 433 MHz, 900 MHz, 2.4 GHz, 3.5 GHz, and 5.8 GHz bands. Each receiver has the capability of scanning a spectrum from 40 MHz to 6 GHz. The receivers are capable of working in tandem for DF applications. Multiple RF-based counter-UAS systems can communicate with each other to extend the range of detection and enhance location finding accuracy. The RF-based counter-UAS systems of the present invention comprise a proprietary intelligence algorithm on one or multiple GPUs with an execution time of less than 10 ms. FIG. 72 lists capabilities of an RF-based counter-UAS system according to one embodiment of the present invention. The capabilities of an RF-based counter-UAS system include detection, classification, direction finding, and message creation.

In one embodiment, an RF-based counter-UAS system can be deployed as a long-distance detection model as illustrated in FIG. 73. Four omni-directional antennas are used to create an array for detection and direction finding. In one embodiment, a gimbal-mounted (rotating) defeat antenna can be added. The long-distance detection model is simple to install. In one embodiment, extremely long-distance detection can be obtained with arrays utilizing masts with a height of 8 to 10 meters.

FIG. 74 illustrates features of drones in the OcuSync family. FIG. 75 illustrates features of drones in the Lightbridge family. The long ranges, adaptability, and ubiquity of OcuSync and Lightbridge systems make them potentially very dangerous. The RF-based counter-UAS systems in the present invention are operable to detect and defeat UASs using these systems. The RF-based counter-UAS systems in the present invention are operable to detect UASs over a distance of 1.5 kilometers with direction. UASs can be detected and categorized faster than with other systems. The RF-based counter-UAS systems can easily be integrated into third party systems (e.g., RADAR and camera systems), or act as the common operating platform for other systems for command and control. The RF-based counter-UAS systems are capable of wideband detection from 70 MHz to 6 GHz, enabling detection of UASs at 433 MHz, 900 MHz, 2.4 GHz, 3.5 GHz, and 5.8 GHz. The RF-based counter-UAS systems are capable of detecting and direction finding UAS controllers. In one embodiment, unknown and anomalous signals can be categorized as UAS. In one embodiment, the RF-based counter-UAS systems in the present invention can be used for detecting other unmanned vehicles, such as land-based, water-borne, or submerged unmanned vehicles, in addition to detecting unmanned aerial vehicles.

In one embodiment, the present invention provides an autonomous and intelligent spectrum monitoring system capable of detecting the presence of wireless activity across extremely wide bands, capturing and performing analysis on highly intermittent signals with short durations automatically, and converting RF data from diverse wireless mobile communication services (e.g., cellular, 2-way, trunked) into knowledge. The autonomous and intelligent spectrum monitoring system of the present invention is advantageous with edge processing, modular architecture, job automation, and a distributed sensor network. Edge processing enables the delivery of a truly autonomous sensor for automated signal recognition and classification and near real-time alarming 24/7, equipped with machine learning algorithms. A modular architecture increases speed and efficiency, enables more bandwidth to be analyzed (with superior resolution), and reduces latency and network traffic (i.e., low backhaul requirements). Logic engines produce relevant alarms, thus limiting false positives. Job automation allows hardware solutions to be customized to meet operational needs with the inclusion of additional receivers and GPUs, a cloud or client hosted backend, and third-party integration. A distributed sensor network supports feature-specific applications such as direction finding and drone threat management, is capable of LMR and cellular demodulation, and assists prosecution efforts with data storage.

The spectrum monitoring system of the present invention represents a paradigm shift in spectrum management. Edge processing migrates away from the inefficiencies of manual analysis and the time delays of backhauling large data sets.
The spectrum monitoring system of the present invention performs real-time, automated processing at the device level, providing knowledge faster, reducing network traffic and improving application performance with less latency. Modular architecture makes additional development, integration of new features and the incorporation of third party systems easy, and also future-proofs capital expenditure. Job automation simplifies operations (e.g., data collection, setting triggers) by enabling the execution of multiple complex tasks with one click on a user interface. Distributed sensors provide security to critical assets spread across large geographies, linked to a network operations center. Data can be shared to perform location finding and motion tracking.

For critical assets, only certain types of transmitting devices (e.g., radios, phones, sensors) should be present on specified frequencies. The spectrum monitoring system of the present invention learns what is common for a communications environment and creates alarms when an anomalous signal is detected in close proximity. Alerts, along with details such as signal type (e.g., LMR, mobile, Wi-Fi) and unique characteristics (e.g., radio ID), are posted to a remote interface for further investigation. The spectrum monitoring system of the present invention, which is capable of learning, analyzing and creating alarms autonomously, provides a heightened level of security for critical assets and infrastructure. FIG. 76 illustrates a spectrum monitoring system detecting an anomalous signal in close proximity to critical infrastructure.

The spectrum monitoring system derives intelligence by collecting, processing, and analyzing spectral environments in near real time. The unique characteristics and signatures of each transmitter are compared automatically to either user-supplied or historical data sets. Potential threats are identified quickly and proactively, reducing acts of vandalism, theft and destruction. Advantageously, the spectrum monitoring system of the present invention reduces the risk of financial losses due to theft, vandalism, and power disruptions, provides additional safety for employees and visitors, makes other security technologies, including thermal cameras and IP video, smarter by working in tandem to identify and locate the presence of threats (with DF functionality), and captures and stores data, which can be utilized as evidence for legal proceedings.

Node devices in the spectrum monitoring system of the present invention can be deployed across large geographies. The spectrum monitoring system is built to interact with third party systems, including cameras and big data platforms, providing additional intelligence. All these systems send pre-processed data to a cloud platform, and the data are visualized efficiently on a single interface. FIG. 77 illustrates a system configuration and interface according to one embodiment of the present invention. Alarms generated at a site are sent to a remote interface, enabling perimeters to be monitored 24/7 from anywhere. Alarm details, including transmitter type (e.g., mobile phone), unique identifiers (e.g., radio ID), UAV type, and directions, are presented on the interface. Job automation restructures work flow and reduces the need for configuration management, greatly reducing manual efforts regarding receiver configuration, trigger and alarm management, analytics and reporting, system health management, and conflict and anomalous signal detection.
Not all activity observed in a spectral environment represents a threat. Even in remote locations, LMR radios can be observed. Pedestrians may also be in the area utilizing mobile devices. The spectrum monitoring system of the present invention is equipped with logic to determine the typical makeup of an environment (e.g., common signals based on time of day), proximity, and duration (e.g., time on site). The logic limits false positives to produce alarms that are meaningful. Parameters can be adjusted as required. FIG. 78 is a screenshot illustrating no alarm being raised for an anomalous signal from LMR traffic that is not in proximity to the site, according to one embodiment of the present invention. The signal at 467.5617 MHz and −73.13 dBm does not trigger an alarm.

In one embodiment, the spectrum monitoring system of the present invention enables 24/7 scanning of a local environment, identification of new activities (e.g., LMR, cellular, Wi-Fi), threat assessment capability (e.g., proximity and duration analysis), and alarm creation with details sent via email and posted to a user interface. In one embodiment, the spectrum monitoring system of the present invention supports a powerful user interface simplifying remote monitoring, greatly improves receiver sensitivity and processing, enabling identification of intermittent signals with millisecond durations (e.g., registration events, WhatsApp messaging, background applications), and provides an enhanced logic engine which is operable to identify both signals with long durations (e.g., voice calls, video streaming, data sessions) and repetitive short bursts (e.g., Facebook updates). In one embodiment, the spectrum monitoring system of the present invention is capable of mobile phone identification from 800-2600 MHz (covering all mobile activity at a site), recognition of intermittent and bursting signals associated with cellular applications, identification of LMR, Wi-Fi, and UAV activity, and determining proximity and limiting false alarms with logic engines.

Node devices in a spectrum monitoring system of the present invention are operable to produce data sets tagged with geographical node location and time. The data sets can be stored on the node devices, or fed to a cloud-based analytics system for historical trend analysis, prediction models, and customer-driven deep learning analytics. Analytics provided by the spectrum monitoring system of the present invention can be used to identify the presence of constant or periodic signals. For example, recognition of the presence of wireless cameras can indicate potential surveillance of a critical asset site. Also for example, the presence of constant or periodic signals can indicate the existence of organized groups attempting to determine normal access patterns for the purpose of espionage or theft. Analytics provided by the spectrum monitoring system of the present invention can also be used to review patterns before and during an intrusion at several sites and predict the next targeted sites. Analytics provided by the spectrum monitoring system of the present invention can also be used to track contractor and employee visits to a site, both planned and unplanned, to augment data for work flow improvements.

FIG. 79 illustrates a GUI of a remote alarm manager according to one embodiment of the present invention. FIG. 80 labels different parts of a front panel of a spectrum monitoring device according to one embodiment of the present invention.
FIG. 81 lists all the labels in FIG. 80 representing different parts of the front panel of the spectrum monitoring device according to one embodiment of the present invention. FIG. 82 illustrates a spectrum monitoring device scanning a spectrum from 40 MHz to 6 GHz according to one embodiment of the present invention.

FIG. 83 lists the capabilities of a spectrum monitoring system according to 5 main on-network mobile phone states plus 1 no-network mobile phone state. A mobile phone in the first main state is active on the network, and its activities also include short-duration (e.g., milliseconds) activities (e.g., text messages, WhatsApp messages and registration events) besides completing a voice call, engaging in a data session, and streaming video. The first main state typically lasts 6 to 8 hours. Receiver sensitivity for speed and bandwidth, as well as processing, are enhanced to enable the spectrum monitoring system of the present invention to intercept these activities and produce an alarm. In the second main state, there are background applications running. To conserve battery life, a mobile phone does not constantly monitor the network, but does “wake up” and check for messages (e.g., every 10 seconds). The mobile phone checks applications including Facebook, SMS, voicemail, email, Twitter, and game challenge notifications. A typical phone sends an update notice (e.g., a request to pull down emails, Facebook messages, etc.) every 90 seconds on average. Background applications such as social media updates are extremely short in duration. To capture these events, the receivers in the spectrum monitoring system are doubled (e.g., from 2 to 4), the bandwidth of each receiver is doubled (e.g., from 40 MHz to 80 MHz), and software is developed to enhance the system to process the increase in samples (e.g., 10×).

FIG. 84 illustrates a mobile event analysis per one-minute interval according to one embodiment of the present invention. Events on a mobile phone include background apps (e.g., Facebook, email, location services, sync apps) with a probability of 90%, active apps (e.g., mobile search, gaming) with a probability of 30%, messaging (e.g., SMS, WhatsApp, Snapchat) with a probability of 15%, and voice calls with a probability of 10%. The combined probability of at least one such event in the interval, assuming the events are independent, is 1 − (1 − 0.9)(1 − 0.3)(1 − 0.15)(1 − 0.1) ≈ 95%.

FIG. 85 is a site cellular survey result according to one embodiment of the present invention. The site cellular survey result reveals there is no active GSM network on site, which means the vast majority of the mobile phones need to be UMTS and LTE capable to have service.

The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of steps in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the,” is not to be construed as limiting the element to the singular. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.

In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

Certain modifications and improvements will occur to those skilled in the art upon a reading of the foregoing description.
The above-mentioned examples are provided to serve the purpose of clarifying the aspects of the invention and it will be apparent to one skilled in the art that they do not serve to limit the scope of the invention. All modifications and improvements have been deleted herein for the sake of conciseness and readability but are properly within the scope of the present invention.
11860210
DETAILED DESCRIPTION Systems and methods for identifying a phase connected to electricity meters are disclosed. Electricity is generated in three phases, A, B, and C, and on each phase, voltage oscillates in a sine wave, for example, at 60 Hz. Each of the three phases of electricity is transmitted on a separate power line, and there may be a fourth line, N, a ground or neutral wire with no voltage on it. These lines interact with each other at transformers, or where power is consumed.

FIG. 1 illustrates an example power distribution environment 100. In this example, a power plant 102 generates electricity, which is carried by high voltage lines 104 to a power substation 106. The power substation 106 provides electricity via a feeder 108 to a transformer 110. The feeder 108 is a power line consisting of individual powered lines with phases A, B, and C servicing a plurality of premises connected via the transformer 110 and electricity meters 112A, 112B, and 112C providing electricity to associated premises 114A, 114B, and 114C.

FIG. 2 illustrates a schematic diagram of an example transformer 200, such as a distribution transformer. Transformers are used to adjust voltage and can be wired between a powered line and the neutral line, in which case an output phase corresponds to the phase of the powered line. The transformers can also be wired between two powered lines, in which case an output phase differs from all three powered lines and may be referred to as a hybrid phase. Therefore, there are six possible phases at the metering level: Phase A-N (or A), Phase B-N (or B), Phase C-N (or C), Phase A-B, Phase B-C, and Phase A-C. In this example, Phases A, B, and C are shown on lines 202, 204, and 206, respectively. In this example, there are 2400 V between Phases A 202 and B 204 and between Phases B 204 and C 206, and a first connection 208 of a primary winding 210 of the transformer 200 is connected to Phase C 206 and a second connection 212 of the primary winding 210 is connected to Phase B 204. A secondary winding 214 has three outputs: a first output 216; a second output, or a center tap, 218; and a third output 220, which are connected to a line-a 222, a neutral line 224, and a line-b 226, respectively. The transformer 200 in this example is a step-down transformer that reduces the voltage of the powered lines, in this case Phases B 204 and C 206, from 2400 V to 120 V between the line-a 222 and the neutral 224 and between the line-b 226 and the neutral 224, and to 240 V between the line-a 222 and the line-b 226.

FIG. 3 illustrates an example set of voltage graphs 300 for various phases. In this example, Phase A 302 is set as a reference, Phase B 304 is 120° ahead of Phase A 302, and Phase C 306 is 240° ahead of Phase A 302. Each phase has a voltage of 120 V RMS and a frequency of 60 Hz. Phase A-B 308, Phase B-C 310, and Phase C-A 312 are also shown. As shown, depending on the phase, the voltage behaves differently in magnitude over time. Poor phase balancing, such as overloading one phase, overloading equipment connected to a phase, or connecting to an incorrect phase, may cause operational inefficiency and equipment overheating, for example, an increase in early equipment failure, delays in power outage response/management, and safety hazards.

FIGS. 4A and 4B illustrate an example process 400 for identifying a phase at the electricity meter level. At block 402, voltage time series data collected from every electricity meter on a feeder is entered.
A feeder is a power line consisting of individual powered lines with phases A, B, and C servicing a plurality of premises connected via electricity meters. The distinct powered lines are presumed to experience different fluctuations in RMS voltage as a result of differing loads. Those fluctuations are expected to be seen by all electricity meters connected to that line, and voltage readings on the same phase of the feeder are expected to be highly correlated compared to voltage readings on other phases. Accordingly, voltage readings collected from each electricity meter on the feeder over a preselected collection time period, such as from Jan. 1, 2020 to Dec. 31, 2020, may be entered as the voltage time series data. The voltage readings may be taken at a preselected interval, such as every five minutes with an accuracy of ±0.15 V. With smart electricity meters in an advanced metering infrastructure (AMI) having automated meter reading (AMR), the voltage time series data may be automatically transmitted from each electricity meter to, and collected by, a central office of the utility service provider or a third party. Additionally, an existing meter-phase connectivity record, which is the current record of information regarding each meter's phase connections, may also be entered. As discussed above, the existing meter-phase connectivity record may not be up to date, for example, when linemen move a customer's electricity meter from one phase to another to better balance the load but fail to record their actions and update the phase information of the customer's meter.

At block 404, the voltage time series data of each electricity meter for a preselected analysis period of the preselected collection time period, such as each month over Jan. 1, 2020 to Dec. 31, 2020, is filtered to omit problematic data or electricity meters. For example, if the expected average voltages (RMS) for the feeder are 120 V, 208 V, 240 V, 277 V, and 480 V, then values that deviate more than ±5% from the expected average voltages may be omitted. Frozen periods, identified as extended periods of time with constant voltage on a given meter, may be omitted. Jump outliers, identified as large interval-to-interval voltage changes outside of a preselected threshold, may be omitted. Electricity meters with an insufficient amount of data over the collection time period may be omitted. Electricity meters having location information inconsistent with the actual geographical locations of the electricity meters may be omitted, or the location information may be corrected and the voltage time series data of those electricity meters with the corrected location information may be used.

At block 406, the voltage correlation of every meter-to-meter combination is calculated. In one example, the voltage correlation may be calculated using the Pearson correlation coefficient (PCC) to determine the correlation between voltage at meter A and voltage at meter B, that is, how a change in voltage at meter A relates to a change in voltage at meter B. The Pearson correlation coefficient, ρ, has a value between −1 and 1, and is given by, for the correlation between X and Y:

ρX,Y = cov(X, Y) / (σX σY),

where cov is the covariance, σX is the standard deviation of X, and σY is the standard deviation of Y. The PCC may be calculated for every meter-to-meter pairing, and the results may be stored in a matrix.
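A compact sketch of the block 406 computation follows, assuming each meter's filtered voltage series is a row of a NumPy array; the synthetic data below is illustrative only, not measured readings.

```python
# Sketch of block 406: Pearson correlation for every meter-to-meter pairing,
# stored as a matrix.
import numpy as np

def pcc_matrix(voltages):
    """voltages: shape (n_meters, n_intervals) -> (n_meters, n_meters) PCCs."""
    return np.corrcoef(voltages)

rng = np.random.default_rng(3)
shared = rng.normal(0.0, 0.5, 2000)           # fluctuation on one powered line
meters = 120.0 + np.vstack(
    [shared + rng.normal(0.0, 0.05, 2000) for _ in range(4)]
)
print(np.round(pcc_matrix(meters), 3))        # near 1.0 for same-phase meters
```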
At block 408, three initial kernels, K1, containing most of the electricity meters for Phases A, B, and C, are determined. For the process of block 408, an agglomerative cluster loop or method may be utilized to determine the three initial kernels. Examples of the agglomerative cluster method include analyses based on a single-linkage distance, Ward linkage distance, dendrogram step-through, and the like. Additionally, or alternatively, a Gaussian mixture model may be utilized to perform the clustering. At block 410, a median first order difference voltage for each preselected interval is determined for each of the initial kernels. Correlations with each of the three initial kernels, PCC1K1, PCC2K1, and PCC3K1, are calculated for each meter at block 412.

At block 414, a hybrid index for the three initial kernels may be calculated based on a median of the correlations PCC1K1, PCC2K1, and PCC3K1. The hybrid index may be defined as the ratio of the second highest (median) correlation to the highest correlation:

Hybrid IndexK1 = median(PCC1K1, PCC2K1, PCC3K1) / max(PCC1K1, PCC2K1, PCC3K1).

Alternatively, the hybrid index may also be defined, or calculated, as:

alt_Hybrid IndexK1 = { [−PCC1K1 + PCC2K1]² / 2 + [PCC1K1 + PCC2K1 − 2·PCC3K1]² / 6 }^(1/2).

The hybrid index is used to separate out the line-to-line connections from the line-to-neutral connections, as described later in more detail. Based on the Hybrid IndexK1, new kernels for each phase, K2, are determined at block 416. The correlations between each of the electricity meters and the new kernels, PCC1K2, PCC2K2, and PCC3K2, are calculated at block 418, and Hybrid IndexK2 is calculated at block 420. At block 422, for each preselected analysis period, the average correlation with each phase for the new kernels K2, mean(PCC1K2), mean(PCC2K2), and mean(PCC3K2), and the average hybrid index for K2, mean(Hybrid IndexK2), are calculated.

The electricity meters are then clustered into three groups based on the average hybrid index for K2, mean(Hybrid IndexK2), at block 424. The three groups include a group with a high hybrid index, which is considered to be the line-to-line phase group; a group with a low hybrid index, which is considered to be the line-to-neutral phase group; and a group with in-between hybrid index values, which is used as a band separating the high and low hybrid index groups. At block 426, the electricity meters of the high hybrid index group, X, are grouped into three line-to-line phases, A-B, B-C, and C-A, based on the average correlations mean(PCC1K2), mean(PCC2K2), and mean(PCC3K2). For the clustering processes of blocks 424 and 426, the agglomerative cluster method as described above may be utilized. At block 428, the electricity meters of the low hybrid index group, Y, are grouped into three line-to-neutral phases, A, B, and C, based on the phase having the highest average correlation, mean(PCC1K2), mean(PCC2K2), and mean(PCC3K2), with the meter.

The three line-to-line groups of electricity meters and the three line-to-neutral groups of electricity meters are combined as new kernels, K3, having six phases, A, B, C, A-B, B-C, and C-A, at block 430. At block 432, the filtered data from block 404 is used to calculate the correlation of each electricity meter with each of the six kernels of K3, PCC1K3, PCC2K3, PCC3K3, PCC4K3, PCC5K3, and PCC6K3, and a hybrid index based on the correlations with the line-to-neutral kernels, PCC1K3, PCC2K3, and PCC3K3, is calculated at block 434. At block 436, average correlations with each of the six phases are calculated as mean(PCCiK3), for i = 1, 2, 3, 4, 5, 6, where i represents each of the six phases, A, B, C, A-B, B-C, and C-A.
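For illustration, the primary hybrid-index definition above can be computed per meter as follows; the numeric inputs are hypothetical correlations, not measured data.

```python
# Hybrid index per the primary definition above: the ratio of the median to
# the maximum of a meter's correlations with the three line-to-neutral kernels.
import numpy as np

def hybrid_index(pcc1, pcc2, pcc3):
    c = np.array([pcc1, pcc2, pcc3])
    return float(np.median(c) / c.max())

# A line-to-neutral meter tracks one kernel strongly -> low hybrid index:
print(round(hybrid_index(0.95, 0.30, 0.25), 3))   # ~0.316
# A line-to-line meter tracks two kernels -> high hybrid index:
print(round(hybrid_index(0.85, 0.80, 0.20), 3))   # ~0.941
```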
At block 438, an average hybrid index, mean(Hybrid IndexK3), is calculated. The electricity meters are grouped into two groups, a line-to-line group and a line-to-neutral group, at block 440. The agglomerative cluster method described above may be utilized to group electricity meters with a high average hybrid index into the line-to-line group and electricity meters with a low average hybrid index into the line-to-neutral group. A predicted phase is assigned to each meter based on the highest correlation at block 442. For the line-to-line group, the predicted phase is the one with the highest correlation in mean(PCCiK3), for i = 4, 5, 6, and for the line-to-neutral group, the predicted phase is the one with the highest correlation in mean(PCCiK3), for i = 1, 2, 3. The predicted phase may then be output for comparison with the existing meter-phase connectivity record.

FIG. 5 illustrates an example detail process of block 408 of FIG. 4. At block 502, the three largest clusters of electricity meters are determined. For every possible number of clusters from three to the number of meters in the sample, the largest three clusters, from large to small, L1, L2, and L3, are determined. At block 504, a ratio of the third largest cluster size to the largest cluster size,

R1to3 = sizeL3 / sizeL1,

is calculated. At block 506, the lowest possible number of clusters, min NClusters, such that R1to3 is greater than a preselected criterion, is determined, which ensures that the three initial kernels obtained are not too imbalanced. For example, for a preselected criterion of 0.5, the largest cluster is no larger than twice the size of the smallest cluster. At block 508, the agglomerative cluster method may be utilized to group the electricity meters into the min NClusters calculated in block 506. At block 510, the three largest clusters are selected as the three initial kernels, and the process proceeds to block 410.

FIG. 6 illustrates an example detail process of block 416 of FIG. 4. At block 602, a predetermined range, for example from 0.75 to 0.85, of Hybrid IndexK1 is evaluated in a predetermined increment, for example, 0.01, and a cutoff value of Hybrid IndexK1 is determined at block 604. The cutoff value of Hybrid IndexK1 may be defined as a value of Hybrid IndexK1 below which there exists a first sufficient number of electricity meters for each phase and above which there exists a second sufficient number of electricity meters, where the first and second sufficient numbers may be preselected. At block 606, electricity meters with a Hybrid IndexK1 value lower than the cutoff value are selected as the elements for three new kernels, K2, and the median of each phase is calculated and defined as the three new kernels, K2, at block 608. The process then proceeds to block 418.

FIG. 7A illustrates an example display 700 of the clusters of electricity meters. Clusters of electricity meters may be displayed when each meter is plotted in 3D coordinates based on its correlation to Phases A, B, and C. The display 700 is a 2D view of the 3D plot viewed from the point (1,1,1) facing the origin (0,0,0), as shown by a graphical representation 702. FIG. 7B illustrates an example display 704 of phases of the electricity meters of FIG. 7A plotted over the locations of the electricity meters on a map.

FIG. 8 illustrates an example block diagram of a system 800 for identifying electrical phase. The system 800 may comprise one or more processors (processors) 802 communicatively coupled to memory 804.
The processors 802 may include one or more central processing units (CPUs), graphics processing units (GPUs), both CPUs and GPUs, or other processing units or components known in the art. The processors 802 may execute computer-executable instructions stored in the memory 804 to perform functions or operations, with one or more components communicatively coupled to the one or more processors 802 and the memory 804, as described above with reference to FIGS. 4-7. For example, the memory 804 may store a phase analysis application 806 that is executed for analyzing the phases as described above with reference to FIGS. 4-7. Depending on the exact configuration of the system 800, the memory 804 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof. The memory 804 may store computer-executable instructions that are executable by the processors 802.

The components of the system 800 coupled to the processors 802 and the memory 804 may comprise a user interface (UI) 806, including a display 808, and a communication module 810. The communication module 810 may communicate with a plurality of electricity meters 812 to receive the voltage time series data collected as discussed above with reference to FIGS. 4-6, as indicated by an arrow 814. Additionally, or alternatively, the electricity meters 812 may communicate with a central office 816 of the utility provider, or a third party, as shown by an arrow 818, and the central office 816 may collect the voltage time series data. The central office 816 may communicate the collected voltage time series data to the communication module 810 as shown by an arrow 820. While the communications 814, 818, and 820 between the communication module 810 and the electricity meters 812, the electricity meters 812 and the central office 816, and the central office 816 and the communication module 810, respectively, are shown as wireless communications, the communications 814, 818, and 820 may be established in various ways, such as via a cellular network, Wi-Fi network, cable network, landline telephone network, and the like.

While not shown, each of the electricity meters 812 may comprise one or more processors, memory coupled to the processors, a metrology module coupled to the processors, and a communication module coupled to the processors. The processors may include one or more central processing units (CPUs), graphics processing units (GPUs), both CPUs and GPUs, or other processing units or components known in the art. The processors may execute computer-executable instructions stored in the memory to perform functions or operations with one or more components communicatively coupled to the one or more processors and the memory, such as measuring the voltage and storing voltage time series data in the memory or transmitting it to the central office 816 or to the communication module 810 of the system 800. Depending on the exact configuration of the electricity meter 812, the memory may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof. The memory may store computer-executable instructions that are executable by the processors. The electricity meter 812 may receive instructions from the central office 816 regarding the preselected collection time period and the preselected interval, for example, changing the collection time period to two years and the interval to two minutes.
Some or all operations of the methods described above can be performed by execution of computer-readable instructions stored on a computer-readable storage medium, as defined below. The terms “computer-readable medium,” “computer-readable instructions,” and “computer-executable instructions,” as used in the description and claims, include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable and -executable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based, programmable consumer electronics, combinations thereof, and the like.

The computer-readable storage media may include volatile memory (such as random-access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). The computer-readable storage media may also include additional removable storage and/or non-removable storage including, but not limited to, flash memory, magnetic storage, optical storage, and/or tape storage that may provide non-volatile storage of computer-readable instructions, data structures, program modules, and the like.

A non-transitory computer-readable storage medium is an example of computer-readable media. Computer-readable media includes at least two types of computer-readable media, namely computer-readable storage media and communications media. Computer-readable storage media includes volatile and non-volatile, removable and non-removable media implemented in any process or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer-readable storage media includes, but is not limited to, phase change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer-readable storage media do not include signals such as communication media.

The computer-readable instructions stored on one or more non-transitory computer-readable storage media, when executed by one or more processors, may perform operations described above with reference to FIGS. 4-7. Generally, computer-readable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
CONCLUSION Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.
11860211
DETAILED DESCRIPTION FIG. 1 illustrates an example environment in which there is provided a vehicle comprising a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. In the example shown in FIG. 1, there is a vehicle 100. The vehicle 100 may be any type of automotive vehicle that is capable of being powered by a battery and/or a fuel cell, including, for example, a car, a motorcycle, a van, a lorry, a tractor and/or an aircraft. The vehicle 100 is powered by an energy source that comprises a high voltage system 102; for example, the vehicle 100 may be battery powered and/or powered by a fuel cell. The high voltage system 102 is in communication with an interface circuit 104 that, on detecting that the high voltage system 102 is at a desired voltage, for example at or below 60 V, outputs a signal indicating that the high voltage system 102 is at a desired voltage. This signal is received by the control electronics 106 that, on receiving the signal from the interface circuit 104, output an indication that the high voltage system 102 is at a desired voltage.

FIG. 2 illustrates a schematic circuit diagram of a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. The schematic is exemplary of the architecture, which comprises a high voltage domain 200, the high voltage domain 200 comprising an interface circuit 202, and the interface circuit 202 comprising a high voltage constant current source 204, a voltage threshold detector 206 and an electrical isolation circuit 208 to decouple the output signal from the high voltage. The interface circuit 202 is connected to a low voltage computing device 210, and the interface circuit 202 outputs a signal, indicative of whether the voltage is at or below a desired voltage, to the low voltage computing device 210. In some examples, the interface circuit 202 and computing device 210 may be used to support diagnostics of components that might otherwise be difficult to diagnose, for example, low potential contactor or bus discharge circuitry. In another example, the interface circuit 202 and the computing device 210 may be used as a safety backup in case of loss of communication with analog sensors on the system, such as voltage monitors.

FIG. 3 illustrates another schematic circuit diagram of a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. The schematic is exemplary of the architecture, which comprises a high voltage constant current source (for example, 302, 304, 306, 308), a voltage threshold detector configured to output a signal indicating whether the voltage of a high voltage component is at, or below, a desired voltage (for example, 310a, 310b, 312), and an electrical isolation circuit to decouple the output signal from the high voltage (for example, 314, 316). A high voltage DC (i.e., the bus voltage) is produced at 300. The high voltage DC source is connected to a high voltage transistor 302, such as a high voltage field-effect transistor (FET) (e.g., a high voltage N-channel metal-oxide-semiconductor field-effect transistor (N-MOSFET)), that is capable of withstanding the high voltage received at 300, for example an N-MOSFET with a drain-source voltage rating of 1000 V. A resistor 304, such as a 2 MΩ resistor, provides the gate voltage for the transistor 302.
The gate voltage is controlled by a low voltage negative-positive-negative (NPN) bipolar junction transistor 306, e.g., so that the voltage drop across resistor 308 is a constant voltage, e.g., the voltage that falls between the base and the emitter of the bipolar junction transistor (Vbe). This constant voltage leads to a constant current, for example, approximately 5 mA. In this example circuit, there is a first 30 V Zener diode 310a and a second 20 V Zener diode 310b that, together, emulate a single 50 V Zener diode. In other examples, a single 50 V Zener diode may be used. In other examples, an arrangement that prevents current from flowing below a threshold voltage may be implemented instead of, or in addition to, a Zener diode, or diodes. In an example system, this threshold voltage may be any value lower than the bus voltage, including values between 50 V and 60 V. The voltage of the Zener diode sets a desired voltage of the high voltage system in the vehicle, below which it is indicated that the high voltage system is at, or below, a desired voltage. In some examples, for example, where a 50 V Zener diode is used, circuit tolerances are taken into account to ensure all circuits report at or below the desired voltage. The Zener diodes 310a, 310b are in communication with a diode 312 that protects the light emitting diode (LED) of opto-isolator 314 from reverse voltage in the case of rapid discharge of the bus. The opto-isolator 314 isolates the high voltage system from the control electronics 316 and provides a low-voltage digital output to the control electronics 316 when the voltage of the high voltage system is at, or below, a desired voltage.
FIG. 4 illustrates a schematic graph of input and output voltages of a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. The graph 400 depicts the bus voltage, which ranges, for example, from 0 V 402 to 0.8 kV 404 (left axis), and the current through the opto-isolator LED, which ranges, for example, from 0 A 406 to 6.5 mA 408 (right axis). When the bus voltage is at, or below, for example, 60 V, the current through the opto-isolator LED is zero, and when the bus voltage is above, for example, 60 V, the current through the opto-isolator LED is, for example, 5.5 mA to 6.5 mA. The graph 410 depicts the digital output of the opto-isolator, which ranges from, for example, ~0 V 412 to 5 V 414. The digital output corresponds to the current running through the LED, such that when the high voltage system is at, or below, for example, 60 V, a digital signal is output at the opto-isolator in conjunction with the pull-up resistor in the control module 316, indicating that the high voltage system is operating as desired, e.g., within an expected operating voltage range. If the high voltage system is above, for example, 60 V, the digital signal indicates the presence of high voltage in the high voltage system.
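The threshold behavior described in connection with FIGS. 3 and 4 can be summarized numerically. The following Python fragment is a minimal behavioral sketch, not the circuit itself: the 50 V Zener stack, the approximately 5 mA constant current, and the 5 V pull-up are taken from the illustrative values above, while the LED and protection diode forward drops (and therefore the exact trip point, about 52 V here) are assumed for illustration.

ZENER_V = 50.0     # combined 30 V + 20 V Zener stack
LED_DROP = 1.2     # assumed forward drop of the opto-isolator LED
DIODE_DROP = 0.7   # assumed forward drop of the protection diode
I_CONST = 5e-3     # constant current set by the BJT/resistor pair (~5 mA)

def led_current(bus_v: float) -> float:
    """Current through the opto-isolator LED for a given bus voltage."""
    # Below the Zener threshold (plus diode drops) no current flows;
    # above it, the constant current source pins the current at ~5 mA.
    if bus_v <= ZENER_V + LED_DROP + DIODE_DROP:
        return 0.0
    return I_CONST

def digital_output(bus_v: float, v_pullup: float = 5.0) -> float:
    """Low-voltage digital output seen by the control electronics."""
    # LED conducting -> opto-isolator transistor on -> output pulled low;
    # LED off -> output pulled up, signaling the bus is at or below the
    # desired voltage.
    return 0.0 if led_current(bus_v) > 0.0 else v_pullup

for v in (0.0, 45.0, 400.0, 800.0):
    print(f"bus={v:6.1f} V  led={led_current(v) * 1e3:.1f} mA  out={digital_output(v):.1f} V")

Consistent with the note on circuit tolerances above, the trip point of such a stack lands somewhat above 50 V and below the 60 V limit discussed in connection with FIG. 4.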
FIG. 5 illustrates a schematic circuit diagram of a system for monitoring a high voltage electrical system and configured to provide a fast bus discharge, in addition to detection of the voltage state, in accordance with some examples of the disclosure. The circuit of FIG. 5 is broadly similar to that of FIG. 3. The schematic is exemplary of the architecture, which comprises a switch (for example, 502, 504), a high voltage constant current source (for example, 508, 510, 512, 514), a voltage threshold detector configured to output a signal indicating whether the voltage of a high voltage component is at, or below, a desired voltage (for example, 516a, 516b, 518), and an electrical isolation circuit to decouple the output signal from the high voltage (for example, 520, 522). As before, a high voltage DC (i.e., the bus voltage) is produced at 500, and the high voltage DC source 500 is connected to a high voltage transistor 508. Again, a resistor 510 provides the gate voltage for the transistor 508. The gate voltage is controlled by a low voltage NPN bipolar junction transistor 512 so that the voltage drop across resistor 514 is a constant Vbe. This constant voltage leads to a constant current, for example, approximately 5 mA. In this example circuit, there is a first 30 V Zener diode 516a and a second 20 V Zener diode 516b that, together, emulate a single 50 V Zener diode. The Zener diodes 516a, 516b are in communication with a diode 518 that protects the light emitting diode (LED) of opto-isolator 520 from reverse voltage in the case of rapid discharge of the bus. The opto-isolator 520 isolates the high voltage system from the control electronics 522 and provides a low-voltage digital output to the control electronics 522 when the voltage of the high voltage system is at, or below, a desired voltage. In addition, a contactor 502 and a bus capacitor 506 have been added to the circuit. The contactor is opened and closed via power circuit 504. The contactor switch is arranged to open at, for example, time = 1 second. On opening the switch, the circuit discharges the capacitor 506 using a constant current of, for example, 5 mA. The control circuit 522 may be configured to open the contactor and, for example, may be used to determine the discharge duration and whether a desired voltage is present via the opto-isolator 520.
FIG. 6 illustrates a schematic graph of input and output voltages of a system for monitoring a high voltage electrical system and configured to provide a fast bus discharge, in accordance with some examples of the disclosure. Graph 600 shows the bus voltage falling from, for example, 750 V 602 to 45 V 604 over, for example, 1.6 seconds. In this example, the contactor is opened at 1 second, which is when the voltage starts falling as the bus is discharged. The graph 606 depicts the current through the opto-coupler falling from, for example, 3.8 mA 608 to 0 mA 610 as the circuit discharges. The output signal from the opto-coupler increases from, for example, ~0 V 612 to 5 V 614, indicating that the circuit has discharged to a desired voltage. As can be seen, the digital output only indicates that the circuit has discharged to a desired voltage when the voltage reaches a threshold amount, such as, in this example, 45 V.
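Because capacitor 506 is discharged at a constant current, the bus voltage falls linearly and the discharge duration follows directly from t = C(V0 - Vend)/I. A short worked example in Python follows; note that the bus capacitance is not given in the text, so the value below is assumed purely so the numbers line up with the roughly 0.6 second discharge illustrated in FIG. 6.

C_BUS = 4.3e-6            # assumed bus capacitance in farads (not from the text)
I_DIS = 5e-3              # constant discharge current (~5 mA, per the text)
V0, V_END = 750.0, 45.0   # illustrative start and end voltages (FIG. 6)

# Constant-current discharge of a capacitor: dV/dt = -I/C, so the voltage
# ramps down linearly and the discharge time is t = C * (V0 - V_END) / I.
t_discharge = C_BUS * (V0 - V_END) / I_DIS
print(f"discharge time = {t_discharge:.2f} s")   # about 0.61 s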
FIG. 7 illustrates a schematic high voltage architecture, in accordance with some examples of the disclosure. High voltage architecture 700 comprises a high voltage module 702, a battery electric center (BEC) 704, a step-up DC-DC module 706, an electric motor and inverter 708 and a high voltage control module 710. Components within these modules are monitored by interface circuits, such as those described in connection with FIGS. 2 and 3 above, that output an isolated low voltage digital signal, indicating whether the voltage of the high voltage component is at, or below, a desired voltage, to the high voltage control module 710.
The high voltage module 702 comprises a high voltage source 712, high voltage connections 714a, 714b, and interface circuit components 716a, 716b, 716c, 716d that each output an isolated low voltage digital supply signal 718a, 718b, 718c, 718d, indicating whether the voltage of the high voltage component is at, or below, a desired voltage, to the high voltage control module 710. In this example, the interface circuit components 716a, 716b, 716c, 716d monitor the high voltage source, the pre-charge components and the switches associated with the high voltage module 702. The BEC 704 comprises an accessory output 720 and a charging input 722 that are connected 724, 726 to the high voltage connection. The interface circuit components 728a, 728b monitor the accessory output 720 and the charging input 722 and output an isolated low voltage digital supply signal 730a, 730b, indicating whether the voltage of the high voltage component is at, or below, a desired voltage, to the high voltage control module 710. The step-up DC-DC module 706 comprises an input capacitance 732 and an output capacitance 734 connected to the high voltage connections. An interface circuit component 736 monitors the components and outputs an isolated low voltage digital supply signal 738, indicating whether the voltage of the high voltage component is at, or below, a desired voltage, to the high voltage control module 710. The interface circuit may also perform the role of bus discharge to render the system safe for maintenance work. The electric motor and inverter 708 are connected 740 to the high voltage connectors and are monitored by an interface circuit component 742. The interface circuit component monitors the electric motor and inverter and outputs an isolated low voltage digital supply signal 744, indicating whether the voltage of the high voltage component is at, or below, a desired voltage, to the high voltage control module 710. The interface circuit may also perform the role of bus discharge to render the system safe for maintenance work. In some examples, the output from the interface circuit may be used as a secondary signal to validate other sensors on a high voltage system.
FIG. 8 illustrates a block diagram representing components of a computing device and data flow therebetween for a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. Computing device 800 comprises control circuitry 802, input circuitry 806, and an output module 822. Control circuitry 802 may be based on any suitable processing circuitry (not shown) and may comprise control circuits and memory circuits, which may be disposed on a single integrated circuit or may be discrete components and processing circuitry. Some control circuits may be implemented in hardware, firmware, or software. An input 804, for example, an input from a high voltage component of a vehicle, is received at the input circuitry 806 and is transmitted 808 to the control circuitry 802. The control circuitry 802 comprises a high voltage receiving module 810, a voltage filtering module 814, an isolating module 818 and an output module 822. At the high voltage receiving module 810, an input is received and is transmitted 812 to the voltage filtering module 814. At the voltage filtering module 814, an output is transmitted 816 to the isolating module 818 if the received voltage is above a threshold amount (i.e., a desired voltage). If the received voltage is below the threshold amount, the voltage filtering module 814 does not transmit an output to the isolating module 818.
At the isolating module 818, the presence, or absence, of an input is used to determine whether to output a digital signal indicating whether the input 804 is at a desired voltage. If the input 804 is at a desired voltage, the isolating module 818 transmits 820 a digital signal to the output module 822, where the desired voltage indicating module 824 generates an output indicating that the input (and hence, for example, the high voltage component) is at a threshold voltage, such as, for example, a desired voltage of 60 V.
FIG. 9 illustrates a flowchart of illustrative steps involved in a system for monitoring a high voltage electrical system, in accordance with some examples of the disclosure. In some examples, process 900 may run on a computing device. In other examples, process 900 may be implemented in discrete circuitry. At 902, an input at a first voltage is received from a high voltage component. At 904, it is identified whether the first voltage is above or at/below a threshold voltage, for example, a desired voltage of 60 V. If the voltage is at/below the threshold voltage, the input is isolated from the output circuit 906 and, at 908, a digital signal is generated that indicates that the input voltage is desired. If the voltage is above the threshold voltage, the input is isolated from the output circuit 910 and, at 912, a digital signal is generated that indicates that the input voltage is not desired.
While the present disclosure is described with reference to particular example applications, it will be appreciated that the disclosure is not limited thereto and that particular combinations of the various features described and defined in any aspects can be implemented and/or supplied and/or used independently. It will be apparent to those skilled in the art that various modifications and improvements may be made without departing from the scope and spirit of the present disclosure. Those skilled in the art would appreciate that the actions of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional actions may be performed without departing from the scope of the disclosure. Any system features as described herein may also be provided as a method feature and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure. It shall be further appreciated that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. Any feature in one aspect may be applied to other aspects, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
11860212
DETAILED DESCRIPTION
A data analysis application 126 (shown referring to FIG. 1) performs analysis of grid data and sensor data to identify grid devices that are probable emission sources of sensor measurements that may indicate a degraded or degrading grid device. The grid refers to the electricity transmission network that provides electricity from various power generation systems to electricity demand locations through substations and overhead and buried cables as understood by a person of skill in the art. The above ground cable networks include poles of various materials and sizes to which grid devices are mounted and connected to each other through conductive wires or overhead lines. As used herein, an asset may refer to a discrete grid device. An example device to generate the sensor data is the Trekker™ sensor manufactured by Exacter, Inc. of Columbus, Ohio, USA. For example, U.S. Pat. Nos. 7,912,660, 7,577,535, and 10,209,291 describe characteristics of the Trekker sensor.
In an illustrative embodiment, a sensor 113 (shown referring to FIG. 1) is mounted on a vehicle, such as a garbage truck, a postal truck, a delivery truck, etc., that travels a regular route adjacent to portions of the electrical grid. For example, electrical grid devices, such as a transformer, a fuse, a pole, a recloser, a switch, an insulator, a capacitor, etc., are typically located near a road such that a vehicle mounted sensor 113 can detect measurements from the electrical grid devices. The route may be daily, weekly, monthly, etc. In an illustrative embodiment, the sensor periodically detects RF emissions. For example, each second, sensor 113 may obtain a measurement of the local electromagnetic radiation field in the RF frequency band to determine whether an RF emission is detected from a grid device as opposed to a different type of device, such as an RF transmitter. For illustration, the RF frequency band is from 3 hertz (Hz) to 3,000 gigahertz (GHz). In alternative embodiments, sensor 113 may be designed to measure other physical phenomena, such as ultrasound (e.g., using a microphone) or a different electromagnetic radiation frequency band, including IR, in addition to, or instead of, the RF emissions. A measure of a strength or intensity of the detected signal may be computed. For example, the Trekker sensor computes a maintenance merit value (MMV) as a measure of the signal intensity and sets an emission source flag value that indicates whether the emission likely came from a grid device as opposed to a different type of device.
Sensor 113 may refer to a device that includes a plurality of sensors. For example, sensor 113 may also include a global positioning system (GPS) sensor that determines a location, such as the geodetic coordinates, at which each sensor measurement is taken. Sensor 113 further may include additional types of sensors such as environmental sensors that measure a barometric pressure, a temperature, a relative humidity, air contaminants, etc. A plurality of sensors 113 may be used to monitor a predefined portion of the grid. For illustration, the predefined portion of the grid may be associated with a municipality or an electricity provider. For example, a different sensor 113 may be mounted on each of a plurality of vehicles that drive a predefined route each day, week, month, etc.
Referring to FIG. 10, a block diagram of a system solution to identify potential issues on the distribution grid is shown in accordance with an illustrative embodiment.
The system solution may include grid data 1000, maintenance data 1002, sensor data 1004, a data mapping process 1006, a define initial parameters process 1008, a build circuit model process 1010, a compute outage summary data process 1012, a create outage summary GUI tab 1014, a clean and/or normalize data process 1016, a define map overlay process 1018, a create sensor route GUI tab 1020, a detect events process 1022, a location matching process 1024, a define emission source status process 1026, a prioritize assets process 1028, a create point in time GUI tab 1030, and a create emission source GUI tab 1032. Grid data 1000, maintenance data 1002, and sensor data 1004 may be acquired from different sources and may use different formats. Sensor readings may be collected and uploaded on a predefined timeframe such as daily. Utilities may provide geographic information system (GIS) data for their circuit in grid data 1000 that may include one or more discrete asset locations. The utilities may provide outage information related to their circuit in maintenance data 1002. Data mapping process 1006 reads and processes grid data 1000 and maintenance data 1002 into a format for further processing. Define initial parameters process 1008 may access utility specific data parameters such as a projection method to convert between earth-centered, earth-fixed (ECEF) coordinates and geodetic coordinates, a definition of a date format, exclusion filters, predefined distances for clustering, etc. Geodetic coordinates are defined using a latitude, a longitude, and an altitude.
Build circuit model process 1010 reads asset type data from grid data 1000 and combines the data to create circuit model data. Assets may be grouped using a clustering process for disjoint cluster analysis to identify geographical regions of interest. The asset locations may be converted from geodetic coordinates to ECEF coordinates and a predefined cluster radius, such as 50 feet, applied to group the assets so that a single cluster may be used to represent a plurality of individual assets (a coordinate conversion and grouping sketch follows this overview).
Compute outage summary data process 1012 may join the circuit model data with equipment related outages from maintenance data 1002 to compute a median time to repair used to estimate customer minutes of interruption (CMI) if an equipment failure occurs. Major event days (MED) may be excluded from the outage summary data. Create outage summary GUI tab 1014 may create a GUI window that visually presents the computed outage summary data. For example, as discussed further below, a third user interface window 500 (shown referring to FIGS. 5, 6, 7, 8A, 8A (continued), 8B, and 8B (continued) in accordance with an illustrative embodiment) may present an outage history tab 502 shown referring to FIG. 5. A cluster centroid may be computed to plot the assets on a map as a group to reduce the amount of data. A unique source identifier may be assigned to each cluster.
Clean and/or normalize data process 1016 may clean and/or normalize grid data 1000, maintenance data 1002, and/or sensor data 1004 for further processing. Define map overlay process 1018 combines sensor location data from sensor data 1004 that is overlaid on the circuit model. Create sensor route GUI tab 1020 may create a GUI window that visually presents the sensor route data. For example, as discussed further below, third user interface window 500 may present a vehicle route tab 504 shown referring to FIG. 6.
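The geodetic-to-ECEF conversion and radius-based grouping described for build circuit model process 1010 can be sketched in Python as follows. This uses the standard WGS-84 conversion; the greedy single-pass grouping merely stands in for the disjoint cluster analysis an actual implementation might use, and the 50-foot radius is the illustrative value from the text.

import math

A = 6378137.0           # WGS-84 semi-major axis (meters)
E2 = 6.69437999014e-3   # WGS-84 first eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, alt_m: float = 0.0):
    """Convert geodetic coordinates to earth-centered, earth-fixed X, Y, Z (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

def cluster_assets(assets, radius_m: float = 15.24):  # 50 feet is about 15.24 m
    """Greedy grouping sketch: an asset joins the first cluster whose seed
    lies within radius_m (Euclidean distance in ECEF), else it seeds a new
    cluster. A production system would use a proper clustering procedure."""
    clusters = []  # list of (seed_xyz, [member (lat, lon) pairs])
    for lat, lon in assets:
        p = geodetic_to_ecef(lat, lon)
        for seed, members in clusters:
            if math.dist(p, seed) <= radius_m:
                members.append((lat, lon))
                break
        else:
            clusters.append((p, [(lat, lon)]))
    return clusters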
Detect events process 1022 may group and/or filter signals from sensor data 1004. For example, signals obtained during a specific time period may be grouped when successive measurements are within a predefined distance of each other and associated with a single event. A signal intensity may be associated with the grouped signals based on a highest signal intensity among the included successive measurements. Various rules may be applied to identify an event.
Location matching process 1024 may match events with assets in the circuit model. Groups that meet the minimum persistence criterion and are within a fourth predefined distance, based on a detection range of the sensor, of an asset in the circuit model may be filtered to define an asset event list. For example, a detection range may be defined for the sensor based on a minimum detectable signal-to-noise ratio defined for the sensor, which is an antenna, as understood by a person of skill in the art. For example, event groups may be defined as part of the location matching process to geolocate signals in aggregate, similar to triangulating signals to a geographic location.
Define emission source status process 1026 may identify emission sources. The assets identified across time intervals in the asset event list may be compared and flagged as active, inactive, or new (a sketch of these status rules follows this discussion). The flag may be associated with the source identifier to maintain a history of activity. In an illustrative embodiment, a current active list, a current inactive list, a current new list, an historical active list, an historical inactive list, and an historical new list are maintained to include a list of devices with the associated status. The current active list, the current inactive list, and the current new list may be maintained for assets with the associated status over a predefined current time period such as four weeks. The current active list, the current inactive list, and the current new list may be maintained in a current table. The historical active list, the historical inactive list, and the historical new list may be maintained for assets with the associated status over the entire time period during which the assets have been monitored. The historical active list, the historical inactive list, and the historical new list may be maintained in an historical table. In some cases, an asset may change from active to inactive to new in various permutations. This history is maintained in the historical table. The source identifier that indicates the cluster to which each asset belongs may also be maintained in the current table and the historical table.
For a first time interval, the source identifiers identified as an emission source may be flagged as new. For the remaining intervals, the source identifiers may be compared with the asset event list for one or more previous intervals and flagged as active or inactive. For example, when a source identifier is found in both a new asset event list and a previous asset event list, the source identifier may be flagged as active; when the source identifier is not found in both the new asset event list and the previous asset event list, whether the sensor passed the region associated with the asset(s) identified by the source identifier may be determined. When the sensor passed the region associated with the asset(s), the source identifier may be flagged as inactive; otherwise, the source identifier may be flagged as active. In this manner, when the sensor does not pass a region associated with the asset(s), the asset is not classified as inactive until the sensor passes the region associated with the asset(s) and no longer determines that the asset(s) constitute an emission source. Assets may not be dropped from the active list when the sensor did not pass the region associated with the asset because the asset may still be emitting though a signal was not captured.
A time to monitor coverage may be managed by a minimum interval value and a maximum interval value. For example, the minimum interval value may be defined as four weeks and the maximum interval value may be defined as six weeks to see if an asset is still emitting and meets the pass criterion to be considered persistent and flagged as an active or new emission source. The interval may be selected to be sufficient to allow resumption of a regular route that may have been disrupted. The minimum interval value may be used to determine when an asset is moved from the active list to the inactive list. The maximum interval value may be used to determine when an asset is removed from the inactive list. Coverage analysis on weekly data may be used to determine when there were enough passes. The number of passes may be determined by checking unique dates within the interval, irrespective of the number of vehicles that made the pass or whether a qualified signal was detected. For illustration, when the number of passes is greater than or equal to three, the source identifier may be flagged as inactive; when the number of passes is less than three, the source identifier may be flagged as active.
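The status rules just described can be condensed into a small decision function. This Python sketch is illustrative only: the set-based inputs, the pass count, and the threshold of three passes mirror the description above, and all names are hypothetical rather than taken from the figures.

def flag_source(sid, new_events, prev_events, passes_in_interval):
    """Flag a source identifier as new, active, or inactive.

    new_events / prev_events: sets of source identifiers appearing in the
    current and previous asset event lists; passes_in_interval: count of
    unique dates on which the sensor covered the source's region.
    """
    if sid in new_events and sid not in prev_events:
        return "new"       # first detection of this emission source
    if sid in new_events and sid in prev_events:
        return "active"    # persistent across intervals
    # Not detected this interval: demote to inactive only if the sensor
    # actually covered the region often enough; otherwise keep it active
    # because the asset may still be emitting.
    return "inactive" if passes_in_interval >= 3 else "active"

print(flag_source("SRC-42", {"SRC-42"}, set(), 0))        # -> "new"
print(flag_source("SRC-42", set(), {"SRC-42"}, 4))        # -> "inactive"
print(flag_source("SRC-42", set(), {"SRC-42"}, 1))        # -> "active"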
Prioritize assets process 1028 may prioritize assets based on an estimated impact and whether the asset is a critical device within a protective zone. A protective zone is an area serviced by a protective device such as a recloser, a fuse, etc. For example, the estimated impact may be determined based on a number of customers that would be affected if the probable emission source experienced a failure leading to an outage by the asset and/or the CMI computed by multiplying the number of customers that would be affected by a median time to repair the asset (a worked example follows below).
Create point in time GUI tab 1030 may create a GUI window that visually presents the point in time data. For example, as discussed further below, third user interface window 500 may present a point in time tab 506 shown referring to FIG. 7. The asset event list for all time intervals may be used to update the point in time data included in point in time tab 506. Create emission source GUI tab 1032 may create a GUI window that visually presents the emission source data. For example, as discussed further below, third user interface window 500 may present a probable emission source tab 508 shown referring to FIGS. 8A, 8A (continued), 8B, and 8B (continued). The asset event list for a most recent time interval may be used to update the current emission sources included in probable emission source tab 508.
Referring to FIG. 1, a block diagram of a data analysis device 100 is shown in accordance with an illustrative embodiment. Data analysis device 100 may include an input interface 102, an output interface 104, a communication interface 106, a non-transitory computer-readable medium 108, a processor 110, utility data transformation application 122, sensor data transformation application 124, data analysis application 126, and one or more output datasets 128. Fewer, different, and/or additional components may be incorporated into data analysis device 100.
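The CMI estimate used by the prioritization step above reduces to simple arithmetic: customers affected multiplied by the median time to repair. A minimal sketch with made-up numbers:

def customer_minutes_interrupted(customers_affected: int,
                                 median_repair_minutes: float) -> float:
    """Estimated CMI if a probable emission source fails and causes an outage."""
    return customers_affected * median_repair_minutes

# For example, 1,200 affected customers and a 90-minute median repair time:
print(customer_minutes_interrupted(1200, 90.0))   # 108000.0 customer-minutes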
Input interface 102 provides an interface for receiving information from the user or another device for entry into data analysis device 100 as understood by those skilled in the art. Input interface 102 may interface with various input technologies including, but not limited to, a keyboard 112, sensor 113, a mouse 114, a display 116, a track ball, a keypad, one or more buttons, etc. to allow the user to enter information into data analysis device 100 or to make selections presented in a user interface displayed on display 116 or to allow a device to provide data to data analysis device 100. The same interface may support both input interface 102 and output interface 104. For example, display 116 comprising a touch screen provides a mechanism for user input and for presentation of output to the user. Data analysis device 100 may have one or more input interfaces that use the same or a different input interface technology. The input interface technology further may be accessible by data analysis device 100 through communication interface 106.
Output interface 104 provides an interface for outputting information for review by a user of data analysis device 100 and/or for use by another application or device. For example, output interface 104 may interface with various output technologies including, but not limited to, display 116, a speaker 118, a printer 120, etc. Data analysis device 100 may have one or more output interfaces that use the same or a different output interface technology. The output interface technology further may be accessible by data analysis device 100 through communication interface 106.
Communication interface 106 provides an interface for receiving and transmitting data between devices using various protocols, transmission technologies, and media as understood by those skilled in the art. Communication interface 106 may support communication using various transmission media that may be wired and/or wireless. Data analysis device 100 may have one or more communication interfaces that use the same or a different communication interface technology. For example, data analysis device 100 may support communication using an Ethernet port, a Bluetooth® antenna, a Wi-Fi antenna, a telephone jack, a USB port, etc. Data and/or messages may be transferred between data analysis device 100 and another computing device of a distributed computing system 130 using communication interface 106.
Computer-readable medium 108 is an electronic holding place or storage for information so the information can be accessed by processor 110 as understood by those skilled in the art. Computer-readable medium 108 can include, but is not limited to, any type of random access memory (RAM), any type of read only memory (ROM), any type of flash memory, etc., such as magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disc (CD), digital versatile disc (DVD), etc.), smart cards, flash memory devices, etc. Data analysis device 100 may have one or more computer-readable media that use the same or a different memory media technology. For example, computer-readable medium 108 may include different types of computer-readable media that may be organized hierarchically to provide efficient access to the data stored therein as understood by a person of skill in the art. As an example, a cache may be implemented in a smaller, faster memory that stores copies of data from the most frequently/recently accessed main memory locations to reduce an access latency.
Data analysis device 100 also may have one or more drives that support the loading of a memory media such as a CD, a DVD, an external hard drive, etc. One or more external hard drives further may be connected to data analysis device 100 using communication interface 106.
Processor 110 executes instructions as understood by those skilled in the art. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits. Processor 110 may be implemented in hardware and/or firmware. Processor 110 executes an instruction, meaning it performs/controls the operations called for by that instruction. The term “execution” is the process of running an application or the carrying out of the operation called for by an instruction. The instructions may be written using one or more programming languages, scripting languages, assembly languages, etc. Processor 110 operably couples with input interface 102, with output interface 104, with communication interface 106, and with computer-readable medium 108 to receive, to send, and to process information. Processor 110 may retrieve a set of instructions from a permanent memory device and copy the instructions in an executable form to a temporary memory device that is generally some form of RAM. Data analysis device 100 may include a plurality of processors that use the same or a different processing technology.
Some machine-learning approaches may be more efficiently and speedily executed and processed with machine-learning specific processors (e.g., not a generic central processing unit (CPU)). Such processors may also provide additional energy savings when compared to generic CPUs. For example, some of these processors can include a graphical processing unit (GPU), an application-specific integrated circuit, a field-programmable gate array, an artificial intelligence accelerator, a purpose-built chip architecture for machine learning, and/or some other machine-learning specific processor that implements a machine learning approach using semiconductor (e.g., silicon, gallium arsenide) devices. These processors may also be employed in heterogeneous computing architectures with a number of and a variety of different types of cores, engines, nodes, and/or layers to achieve energy efficiencies, processing speed improvements, data communication speed improvements, and/or data efficiency targets and improvements throughout various parts of the system.
Utility data transformation application 122 performs operations associated with transforming data that describes various grid devices, grid customers, grid device outages, grid maintenance data, etc. Some or all of the operations described herein may be embodied in utility data transformation application 122. The operations may be implemented using hardware, firmware, software, or any combination of these methods. For example, utility data transformation application 122 may perform data mapping process 1006, define initial parameters process 1008, build circuit model process 1010, compute outage summary data process 1012, and/or create outage summary GUI tab 1014. Referring to the example embodiment of FIG. 1, utility data transformation application 122 is implemented in software (comprised of computer-readable and/or computer-executable instructions) stored in computer-readable medium 108 and accessible by processor 110 for execution of the instructions that embody the operations of utility data transformation application 122.
Utility data transformation application 122 may be written using one or more programming languages, assembly languages, scripting languages, etc. Utility data transformation application 122 may be integrated with other analytic tools. As an example, utility data transformation application 122 may be part of an integrated data analytics software application and/or software architecture such as that offered by SAS Institute Inc. of Cary, North Carolina, USA. Merely for illustration, utility data transformation application 122 may be implemented using or integrated with one or more SAS software tools such as JMP®, Base SAS, SAS® Enterprise Miner™, SAS® Event Stream Processing, SAS/STAT®, SAS® High Performance Analytics Server, SAS® Visual Data Mining and Machine Learning, SAS® LASR™, SAS® In-Database Products, SAS® Scalable Performance Data Engine, SAS® Cloud Analytic Services (CAS), SAS/OR®, SAS/ETS®, SAS® Visual Analytics, SAS® Viya™, SAS In-Memory Statistics for Hadoop®, etc., all of which are developed and provided by SAS Institute Inc. of Cary, North Carolina, USA. Data mining, statistical analytics, and response prediction are practically applied in a wide variety of industries to solve technical problems.
Utility data transformation application 122 may be implemented as a Web application. For example, utility data transformation application 122 may be configured to receive hypertext transport protocol (HTTP) responses and to send HTTP requests. The HTTP responses may include web pages such as hypertext markup language (HTML) documents and linked objects generated in response to the HTTP requests. Each web page may be identified by a uniform resource locator (URL) that includes the location or address of the computing device that contains the resource to be accessed in addition to the location of the resource on that computing device. The type of file or resource depends on the Internet application protocol such as the file transfer protocol, HTTP, H.323, etc. The file accessed may be a simple text file, an image file, an audio file, a video file, an executable, a common gateway interface application, a Java® applet, an extensible markup language (XML) file, or any other type of file supported by HTTP.
Sensor data transformation application 124 performs operations associated with transforming data generated by one or more of sensors 113. Some or all of the operations described herein may be embodied in sensor data transformation application 124. Similar to utility data transformation application 122, sensor data transformation application 124 may be implemented as a Web application. For example, sensor data transformation application 124 may perform clean and/or normalize data process 1016, define map overlay process 1018, create sensor route GUI tab 1020, and/or detect events process 1022. Referring to the example embodiment of FIG. 1, sensor data transformation application 124 is implemented in software (comprised of computer-readable and/or computer-executable instructions) stored in computer-readable medium 108 and accessible by processor 110 for execution of the instructions that embody the operations of sensor data transformation application 124. Sensor data transformation application 124 may be written using one or more programming languages, assembly languages, scripting languages, etc.
Similar to utility data transformation application 122, sensor data transformation application 124 may be integrated with other analytic tools such as the integrated data analytics software application and/or software architecture offered by SAS Institute Inc. of Cary, North Carolina, USA.
For example, sensor 113 may include one or more sensors of various types that produce a sensor signal value, referred to as a measurement data value, representative of a measure of a physical quantity in an environment to which the sensor is associated, and that generate a corresponding measurement datum that typically is associated with a time that the measurement datum is generated. The environment to which the sensor is associated for monitoring may include the electrical power grid system referred to herein as the grid. Example sensor types include a pressure sensor, a temperature sensor, a position or location sensor, a velocity sensor, an acceleration sensor, a fluid flow rate sensor, a voltage sensor, a current sensor, a frequency sensor, a phase angle sensor, a data rate sensor, a humidity sensor, an acoustic sensor, a light sensor, a motion sensor, an electromagnetic field sensor, a force sensor, a torque sensor, a load sensor, a strain sensor, a chemical property sensor, a resistance sensor, a radiation sensor, an irradiance sensor, a proximity sensor, a distance sensor, a vibration sensor, etc. that may be mounted to various devices, such as a vehicle. The devices themselves may include one or more sensors and/or may be connected to one or more other devices to receive a measurement datum or to send a measurement datum to another device. For example, the Trekker sensor may connect to a cellular network to upload data to another computing device for storage of the generated sensor data remote from the device.
Data analysis application 126 performs operations associated with analyzing the transformed utility and sensor data. Some or all of the operations described herein may be embodied in data analysis application 126. Similar to utility data transformation application 122, data analysis application 126 may be implemented as a Web application. For example, data analysis application 126 may perform location matching process 1024, define emission source status process 1026, prioritize assets process 1028, create point in time GUI tab 1030, and/or create emission source GUI tab 1032. Referring to the example embodiment of FIG. 1, data analysis application 126 is implemented in software (comprised of computer-readable and/or computer-executable instructions) stored in computer-readable medium 108 and accessible by processor 110 for execution of the instructions that embody the operations of data analysis application 126. Data analysis application 126 may be written using one or more programming languages, assembly languages, scripting languages, etc. Similar to utility data transformation application 122, data analysis application 126 may be integrated with other analytic tools such as the integrated data analytics software application and/or software architecture offered by SAS Institute Inc. of Cary, North Carolina, USA.
Utility data transformation application 122, sensor data transformation application 124, and data analysis application 126 may be integrated in various manners to form one or more applications executable by a user. The sensor and utility data that are transformed may be stored in one or more locations on data analysis device 100 and/or on one or more devices of distributed computing system 130.
The sensor and utility data may be stored using various data structures as known to those skilled in the art, including one or more files of a file system, a relational database, one or more tables of a system of tables, a structured query language database, one or more SAS® datasets, etc. on data analysis device 100 or on distributed computing system 130. For example, the sensor and utility data may be stored in various files, databases, datasets, etc., referred to herein as datasets for simplicity. Each dataset of the sensor and utility data may include, for example, a plurality of rows and a plurality of columns. The plurality of rows may be referred to as observation vectors or records (observations), and the columns may be referred to as variables. In an alternative embodiment, the sensor and utility data may be transposed.
In data science, engineering, and statistical applications, data often consists of measurements (across sensors, characteristics, responses, etc.) collected across multiple time instances. These measurements may be collected in the sensor and utility data for analysis and processing or streamed to data analysis device 100 as it is generated. The sensor and utility data may include data captured as a function of time for one or more sensors 113. The data stored in the sensor and utility data may be captured at different time points, periodically, intermittently, when an event occurs, etc. The sensor and utility data may include data captured at a high data rate such as 200 or more observation vectors per second for one or more sensors 113. One or more columns of the sensor and utility data may include a time and/or date value referred to herein as a timestamp. The sensor and utility data may include data captured under normal and abnormal operating conditions of the physical object.
The data stored in the sensor and utility data may be received directly or indirectly from sensor 113 and may or may not be pre-processed in some manner. For example, the data may be pre-processed using an event stream processor such as the SAS® Event Stream Processing Engine (ESPE), developed and provided by SAS Institute Inc. of Cary, North Carolina, USA. For example, data stored in the sensor and utility data may be generated as part of the Internet of Things (IoT), where things (e.g., machines, devices, phones, sensors) can be connected to networks and the data from these things collected and processed within the things and/or external to the things before being stored in the sensor and utility data. For example, the IoT can include sensors in many different devices and types of devices, and high value analytics can be applied to identify hidden relationships and drive increased efficiencies. This can apply to both big data analytics and real-time analytics. Some of these devices may be referred to as edge devices, and may involve edge computing circuitry. Again, some data may be processed with an ESPE, which may reside in the cloud or in an edge device before being stored in the sensor and utility data. The data stored in the sensor and utility data may include any type of content represented in any computer-readable format such as binary, alphanumeric, numeric, string, markup language, etc. The content may include textual information, graphical information, image information, audio information, numeric information, etc. that further may be encoded using various encoding techniques as understood by a person of skill in the art.
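For illustration only, the row/column layout described above might look like the following Python sketch, assuming hypothetical column names; this is not the actual dataset schema.

import pandas as pd

# A hypothetical two-row slice of sensor data: rows are observation
# vectors and columns are variables, including a timestamp column.
observations = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(["2023-05-01 10:00:01", "2023-05-01 10:00:02"]),
        "sensor_id": ["TRK-001", "TRK-001"],
        "latitude": [39.9612, 39.9613],
        "longitude": [-82.9988, -82.9989],
        "signal_intensity": [0.0, 2.7],
        "emission_source_flag": [0, 1],
    }
)
print(observations.shape)   # (2, 6): 2 observation vectors, 6 variables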
The sensor and utility data may be stored on computer-readable medium 108 and/or on one or more computer-readable media of distributed computing system 130 and accessed by data analysis device 100 using communication interface 106, input interface 102, and/or output interface 104. The sensor and utility data may be stored in various compressed formats such as a coordinate format, a compressed sparse column format, a compressed sparse row format, etc. The data may be organized using delimited fields, such as comma or space separated fields, fixed width fields, using a SAS® dataset, etc. The SAS dataset may be a SAS® file stored in a SAS® library that a SAS® software tool creates and processes. The SAS dataset contains data values that are organized as a table of observation vectors (rows) and variables (columns) that can be processed by one or more SAS software tools.
Data analysis device 100 may coordinate access to the sensor and utility data that is distributed across distributed computing system 130 that may include one or more computing devices. For example, the sensor and utility data may be stored in one or more cubes distributed across a grid of computers as understood by a person of skill in the art. As another example, the sensor and utility data may be stored in a multi-node Hadoop® cluster. For instance, Apache™ Hadoop® is an open-source software framework for distributed computing supported by the Apache Software Foundation. As another example, the sensor and utility data may be stored in a cloud of computers and accessed using cloud computing technologies, as understood by a person of skill in the art. The SAS® LASR™ Analytic Server may be used as an analytic platform to enable multiple users to concurrently access data stored in the sensor and utility data. The SAS Viya open, cloud-ready, in-memory architecture also may be used as an analytic platform to enable multiple users to concurrently access data stored in the sensor and utility data. SAS CAS may be used as an analytic server with associated cloud services in SAS Viya. Some systems may use SAS In-Memory Statistics for Hadoop® to read big data once and analyze it several times by persisting it in-memory for the entire session. Some systems may be of other types and configurations.
Referring to FIG. 2, example operations associated with sensor data transformation application 124 are described. Additional, fewer, or different operations may be performed depending on the embodiment of sensor data transformation application 124. The order of presentation of the operations of FIG. 2 is not intended to be limiting. Some of the operations may not be performed in some embodiments. Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions and/or in other orders than those that are illustrated. For example, a user may execute sensor data transformation application 124, which causes presentation of a first user interface window, which may include a plurality of menus and selectors such as drop-down menus, buttons, text boxes, hyperlinks, etc. associated with sensor data transformation application 124 as understood by a person of skill in the art. The plurality of menus and selectors may be accessed in various orders.
An indicator may indicate one or more user selections from a user interface, one or more data entries into a data field of the user interface, such as a text box or a control window, one or more data items read from computer-readable medium 108, or otherwise defined with one or more default values, etc. that are received as an input by sensor data transformation application 124. The operations of sensor data transformation application 124 further may be performed in parallel using a plurality of threads and/or a plurality of worker computing devices.
In an operation 200, a first indicator may be received that indicates new sensor data generated by sensor 113 and stored in sensor data 1004. Sensor 113 may refer to one or more sensors of the same or different type. For example, the first indicator indicates a location and a name of the new sensor data. As an example, the first indicator may be received by sensor data transformation application 124 after selection from a user interface window or after entry by a user into a user interface window. In an alternative embodiment, the new sensor data may not be selectable. For example, a most recently created dataset may be used automatically. The new sensor data may be captured each second in an illustrative embodiment. The new sensor data may have been captured over a predefined period of time to obtain sensor measurements from each sensor 113 over a predefined set of locations. For example, the predefined set of locations may include a road route covered by a vehicle on which each sensor 113 is mounted, such as a road route taken by a garbage truck weekly. The new sensor data may include data from each sensor 113 mounted to a fleet of vehicles, such as a plurality of garbage trucks, that traverse a predefined area such as a rural route or an urban route through a municipality.
For example, referring to FIG. 9, a vehicle 902 is shown located on an area map 900 that shows streets, building locations, and electrical grid assets in accordance with an illustrative embodiment. Illustrative electrical grid assets include a first transmission pole 904, a second transmission pole 906, a third transmission pole 908, a fourth transmission pole 910, and a fifth transmission pole 912. For example, the sensor may be mounted on vehicle 902, which travels a predefined path on the streets that is close to the various electrical grid assets at different times during the path traversal. For example, at the point in time shown in FIG. 9, vehicle 902 is closest to second transmission pole 906 while traveling toward first transmission pole 904. An intensity of an RF emission source varies based on the distance from the source. As a result, as vehicle 902 travels closer and closer to first transmission pole 904, an intensity of an emission source located on first transmission pole 904 increases, while as vehicle 902 travels further and further away from second transmission pole 906, an intensity of an emission source located on second transmission pole 906 decreases. The new sensor data includes the location of vehicle 902 as well as an emission source signal intensity measurement, such as the MMV, at each point in time that a measurement was obtained. Multiple signal intensity measurements may be received at the same time from different sources, some of which may not be emission sources, as explained previously.
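The qualitative intensity behavior as the vehicle drives past a pole can be made concrete with a toy model. The text specifies only that intensity rises and falls with distance; the inverse-square law, the 30 m lateral offset, and the source strength in the Python fragment below are all assumptions made purely for illustration.

def measured_intensity(source_strength: float, distance_m: float) -> float:
    """Hypothetical received intensity: source strength scaled by 1/d^2.
    The propagation law is ASSUMED; the text does not specify one."""
    return source_strength / max(distance_m, 1.0) ** 2

# Vehicle driving past a pole assumed to be 30 m off the road,
# sampled every 25 m of along-road travel:
for x in range(-100, 101, 25):
    d = (x ** 2 + 30 ** 2) ** 0.5   # straight-line distance to the pole
    print(f"x={x:5d} m  intensity={measured_intensity(1e4, d):7.3f}")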
In an illustrative embodiment, the new sensor data may include a serial number or other unique identifier for sensor 113 (which may include a plurality of co-located sensors), a timestamp indicating a time at which each measurement was obtained, a latitude and a longitude at which each measurement was obtained, a signal intensity measurement value, a temperature measurement value, a humidity measurement value, an emission source flag value, a sensor number indicator, etc. The timestamp may include a date and a time. Each sensor 113 may have a unique serial number. In an illustrative embodiment, the signal intensity measurement value may indicate energy within a predefined portion of the RF band that is separated from other energy present in the RF spectrum. The energy may be caused by high frequency transient currents that persist for a short period of time and repeat periodically due to a partial discharge from electrical grid equipment. In an illustrative embodiment, the emission source flag value indicates whether the emission is from equipment, such as an electrical grid device, that is being monitored, as opposed to another type of source that is not being monitored by sensor 113.
In an operation 202, a second indicator may be received that indicates sensor path data defined by filtering data generated from sensor 113. For example, the second indicator indicates a location and a name of the sensor path data. As an example, the second indicator may be received by sensor data transformation application 124 after selection from a user interface window or after entry by a user into a user interface window. In an alternative embodiment, the sensor path data may not be selectable. For example, a predefined dataset may be used automatically. The sensor path data may have been captured from each sensor 113 over the predefined set of locations and/or the predefined area. In an illustrative embodiment, the sensor path data may include a latitude, a longitude, a signal intensity measurement value, the sensor number indicator, etc.
In an operation 204, the new sensor data is read from the location defined using the first indicator.
In an operation 206, a third indicator of a distance value d may be received. In an alternative embodiment, the third indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the distance value d may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the distance value d may be d = 0.1, though other values may be used. In an illustrative embodiment, the distance value d is defined in miles. The distance value d is used to filter the new sensor data to include a single sensor measurement within the distance value d to plot a route of each vehicle on which sensor 113 is mounted with a reduced number of data points.
In an operation 208, the read, new sensor data is filtered using the distance value d to select sensor measurements that are the distance value d apart (a route-thinning sketch follows the description of operations 208 and 210 below). For example, a first sensor measurement is selected for each unique vehicle and each unique time period included in the new sensor data.
For illustration, the new sensor data may have been captured over a most recent one-week time period and include sensor measurements taken during multiple different routes taken by one or more vehicles, while the unique time period is one day, so that different routes taken each day, possibly by each vehicle, may be identified. A second sensor measurement is selected for each unique vehicle and each unique time period that is at least the distance value d from the first sensor measurement, while the intermediate measurements are skipped. A third sensor measurement is selected for each unique vehicle and each unique time period that is at least the distance value d from the second sensor measurement, while the intermediate measurements are skipped, and so on until a last sensor measurement is obtained for each unique vehicle and each unique time period. In alternative embodiments, the data may not be filtered separately based on each unique vehicle.
In an operation 210, the data selected during the filtering of operation 208 are stored as sensor path data. The data may be stored in computer-readable medium 108. The sensor path data may also include filtered sensor measurements from previous time periods. For example, the data selected during the filtering of operation 208 may be appended to data filtered from previous time periods. For illustration, data filtered from previous weeks/months/years may be stored in the sensor path data.
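Operations 206 through 210 amount to a route-thinning pass over time-ordered measurements. The following Python sketch is illustrative only, assuming each measurement is a (latitude, longitude) pair already split by vehicle and day; the great-circle distance here stands in for whatever distance measure an implementation actually uses.

import math

def haversine_miles(p, q):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8   # mean Earth radius in miles
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def thin_route(measurements, d=0.1):
    """Keep the first measurement, then only measurements at least d miles
    from the last kept one, skipping the intermediate measurements, as in
    operations 206-210 above."""
    kept = []
    for m in measurements:   # m is a hypothetical (lat, lon) tuple
        if not kept or haversine_miles(kept[-1], m) >= d:
            kept.append(m)
    return kept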
In an illustrative embodiment, the filtering variable is the emission source flag value that has a value of one when the emission source is determined to be from equipment that is being monitored. In an operation 218, the read, new sensor data is filtered using the filtering value f of the filtering variable indicated in operation 214 to select sensor measurements that are from equipment that is being monitored. For example, only sensor measurements having the emission source flag value of one may be selected from the new sensor data. In an operation 220, a seventh indicator may be received that indicates a drop variable p associated with each sensor measurement included in the read, new sensor data. For example, the seventh indicator indicates a variable to use by name, column number, etc. In an alternative embodiment, the seventh indicator may not be received. For example, the last column in the read, new sensor data may be used automatically. In an illustrative embodiment, the drop variable p is the signal intensity measurement value. In an operation 222, an eighth indicator of a drop threshold value T may be received. In an alternative embodiment, the eighth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the drop threshold value T may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the drop threshold value T may be T=1, though other values may be used. The drop threshold value T may be used to filter the new sensor data to include sensor measurements from electrical equipment that may be an emission source. In an operation 224, the sensor data filtered in operation 218 is further filtered using the drop threshold value T and a drop value of the drop variable p indicated in operation 220 to select sensor measurements with a sufficiently high signal intensity value to indicate a possible emission source. For example, only sensor measurements having pi ≥ T are selected from the sensor data filtered in operation 218, where pi indicates the drop value of the drop variable p of an ith sensor measurement. Operations 218 and 224 may be performed together to filter the new sensor data.
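For illustration only, the thinning and filtering of operations 208, 218 and 224 can be sketched in a few lines of Python; the column names (vehicle, period, lat, lon, flag, intensity) and the pandas-based representation are hypothetical conveniences, since the patent text does not prescribe an implementation language:

    # Hypothetical sketch of operations 208 (path thinning), 218 (emission
    # source flag filter) and 224 (drop threshold filter).
    import math
    import pandas as pd

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles between two latitude/longitude points."""
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2)
             * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def thin_path(group, d=0.1):
        """Operation 208: keep one measurement per distance value d (miles),
        skipping intermediate measurements, within one vehicle/period group."""
        keep, last = [], None
        for row in group.itertuples():
            if last is None or haversine_miles(last.lat, last.lon,
                                               row.lat, row.lon) >= d:
                keep.append(row.Index)
                last = row
        return group.loc[keep]

    def filter_events(data, f=1, t=1.0):
        """Operations 218 and 224: monitored equipment only (flag == f) with
        signal intensity p_i meeting the drop threshold T."""
        return data[(data["flag"] == f) & (data["intensity"] >= t)]

    new_sensor_data = pd.DataFrame({
        "vehicle": [1, 1, 1], "period": ["day1", "day1", "day1"],
        "timestamp": [1, 2, 3],
        "lat": [35.000, 35.001, 35.010], "lon": [-80.0, -80.0, -80.0],
        "flag": [1, 1, 0], "intensity": [2.0, 0.5, 3.0],
    })
    sensor_path = (new_sensor_data.sort_values("timestamp")
                   .groupby(["vehicle", "period"], group_keys=False)
                   .apply(thin_path))   # keeps rows 0 and 2; row 1 is ~0.07 mi away
    sensor_events = filter_events(new_sensor_data)  # keeps row 0 only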
In an operation 226, a ninth indicator of a grouping distance value g may be received. In an alternative embodiment, the ninth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the grouping distance value g may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the grouping distance value g may be g=150, though other values may be used. In an illustrative embodiment, the grouping distance value g is defined in meters. The grouping distance value g is used to group the new sensor data to include sensor measurements within the grouping distance value g in a single cluster. For illustration, the grouping distance value g may be selected based on the detection range of the sensor detecting the signal intensity. In an operation 228, the sensor data filtered in operation 224 are grouped into clusters having a diameter defined by the grouping distance value g. For example, the latitude and the longitude associated with each sensor measurement included in the sensor data filtered in operation 224 are converted to X, Y, Z coordinates in the ECEF coordinate system so that a Euclidean distance can be used to perform the grouping. An altitude may be assumed. For example, sea level may be assumed, or another predefined value may be used to compute the X, Y, Z coordinates for each sensor measurement in the ECEF coordinate system. For illustration, a FASTCLUS procedure included in SAS/STAT® 9.22 may be used to cluster the sensor data filtered in operation 224 into clusters such that each sensor measurement is assigned to a single cluster with each cluster having a size defined by the grouping distance value g. For example, a radius option value for the FASTCLUS procedure may be defined to have the grouping distance value g so that each cluster is separated by the grouping distance value g. A maximum number of clusters may be selected to ensure that the predefined set of locations can be completely covered based on the grouping distance value g. The FASTCLUS procedure outputs a number of clusters, a centroid location for each cluster that includes at least one sensor measurement, a list of the sensor measurements included in each cluster, etc. For illustration, operations 218, 224, and 228 may include the functions described previously for detect events process 1022. In an operation 230, the data grouped in operation 228 are stored as sensor event data. The data may be stored in computer-readable medium 108. The sensor event data also includes filtered, grouped sensor measurements from previous time periods. For example, the data grouped in operation 228 may be appended to data grouped and filtered from previous time periods. For illustration, data filtered from previous weeks/months/years may be stored in the sensor event data.
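As a further hedged illustration (the FASTCLUS procedure is a SAS facility; the leader-style pass below only approximates its radius-based seeding), the geodetic-to-ECEF conversion and the radius-g grouping of operation 228 described above might look like:

    # Sketch of operation 228: lat/lon -> ECEF at an assumed sea-level altitude
    # (spherical Earth for brevity), then a simple leader-style grouping in
    # which the grouping distance value g plays the role of the radius option.
    import math

    def to_ecef(lat_deg, lon_deg, r=6371000.0):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (r * math.cos(lat) * math.cos(lon),
                r * math.cos(lat) * math.sin(lon),
                r * math.sin(lat))

    def group_events(points, g=150.0):
        """Assign each point to the first cluster whose seed lies within g
        meters (Euclidean distance in ECEF); otherwise start a new cluster."""
        clusters = []  # list of (seed_xyz, [member indices])
        for i, (lat, lon) in enumerate(points):
            xyz = to_ecef(lat, lon)
            for seed, members in clusters:
                if math.dist(xyz, seed) <= g:
                    members.append(i)
                    break
            else:
                clusters.append((xyz, [i]))
        return clusters

    events = [(35.2270, -80.8431), (35.2271, -80.8430), (35.3000, -80.9000)]
    print([m for _, m in group_events(events)])  # [[0, 1], [2]]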
Referring to FIG. 3, example operations associated with utility data transformation application 122 are described. Additional, fewer, or different operations may be performed depending on the embodiment of utility data transformation application 122. The order of presentation of the operations of FIG. 3 is not intended to be limiting. Some of the operations may not be performed in some embodiments. Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions and/or in other orders than those that are illustrated. For example, a user may execute utility data transformation application 122, which causes presentation of a second user interface window, which may include a plurality of menus and selectors such as drop-down menus, buttons, text boxes, hyperlinks, etc. associated with utility data transformation application 122 as understood by a person of skill in the art. The plurality of menus and selectors may be accessed in various orders. The operations of utility data transformation application 122 further may be performed in parallel using a plurality of threads and/or a plurality of worker computing devices. The first user interface window and the second user interface window may be the same or different user interfaces. In an operation 300, a tenth indicator may be received that indicates utility data. The utility data may be stored in one or more datasets. For example, the tenth indicator indicates a location and a name of one or more datasets that store the utility data. As an example, the tenth indicator may be received by utility data transformation application 122 after selection from a user interface window or after entry by a user into a user interface window. In an alternative embodiment, the utility data may not be selectable. For example, a most recently created dataset may be used automatically. The utility data may be updated when it is modified. In an illustrative embodiment, the utility data may include datasets that describe electrical grid devices, such as a dataset that describes transformers, a dataset that describes protective devices such as fuses, switches, reclosers, lightning arrestors, etc., a dataset that describes poles, etc. The data may be organized into one or more datasets. In general, each dataset includes a device identifier, a device type (e.g., transformer, pole, fuse, recloser, switch), a device size, a device phase, an upline device, a downline device, a last service date, a number of customers served, a latitude, a longitude, connectivity details, etc. The utility data may be transformed from the utility GIS into standardized formats. In an operation 302, the utility data may be combined to create an entire circuit layout that may include circuit nodes indicating wire connections between devices such as poles, transformers, overhead lines, fuses, etc. The utility data may be transformed into a standardized format. In an operation 304, unique asset location data is created from the combined utility data to combine devices that may be located together, for example, on a common pole. For example, a unique location identifier may be created by adjusting the longitude and latitude to 0.000000000 precision and concatenating the numerical values into a character key value to create a common location identifier for devices and components in the same location. Using the unique location identifier, many devices and components can be grouped and identified using a single geographical point. In an operation 306, a circuit model is created from the combined utility data. The circuit model may be created using the OPTNET procedure included with SAS/OR® 15.2 or the SAS NETWORK procedure. Biconnected components and articulation points may be determined using the BICONCOMP statement of the OPTNET procedure. For example, the circuit model may be created using build circuit model process 1010. The circuit model may be created based on the location and connectivity between devices using the upline and/or downline device indicators and respective geodetic locations. A biconnected component of a graph is a connected subgraph that cannot be broken into disconnected pieces by deleting any single node and its incident links. An articulation point is a node of a graph whose removal would cause an increase in the number of connected components. Articulation points can identify the longitude and latitude closest to a troubled electrical device to help determine the impact of an outage of each grid device. The circuit model may include a device identifier, a device type (e.g., transformer, pole, fuse, recloser, switch), a device size, a device phase, an upline device, a downline device, a last service date, a number of customers served, a latitude, a longitude, connectivity detail, an articulation point indicating a node for multiple connections, etc. In an operation 308, the created circuit model is stored to circuit model data.
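The graph notions used in operation 306 are standard; as a hedged sketch (the device names and links are hypothetical, and the networkx library stands in for the SAS OPTNET procedure's BICONCOMP statement):

    # Circuit model as a graph: nodes are grid devices, edges are the
    # upline/downline connections. Articulation points are the devices whose
    # removal disconnects part of the circuit.
    import networkx as nx

    def location_id(lat, lon):
        """Operation 304: common location key for co-located devices."""
        return f"{lat:.9f}_{lon:.9f}"

    g = nx.Graph()
    g.add_edges_from([
        ("substation", "recloser1"),
        ("recloser1", "pole1"), ("pole1", "transformer1"),
        ("recloser1", "pole2"), ("pole2", "transformer2"),
    ])

    cut_devices = set(nx.articulation_points(g))  # {'recloser1', 'pole1', 'pole2'}
    blocks = list(nx.biconnected_components(g))   # subgraphs robust to any single-node loss

    print(location_id(35.227, -80.8431))          # '35.227000000_-80.843100000'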
In an operation 310, an eleventh indicator may be received that indicates maintenance data 1002. For example, the eleventh indicator indicates a location and a name of maintenance data 1002 that includes an outage history associated with the electrical grid being monitored. As an example, the eleventh indicator may be received by utility data transformation application 122 after selection from a user interface window or after entry by a user into a user interface window. In an alternative embodiment, maintenance data 1002 may not be selectable. For example, a most recently created dataset may be used automatically. Maintenance data 1002 may be updated when it is modified. In an illustrative embodiment, maintenance data 1002 may include maintenance and outage records relative to the devices being recorded to provide a historical reference for the state of the electrical grid devices. For each device that experienced an outage, maintenance data 1002 may include a device identifier, a timestamp, an outage duration, a number of customers affected, an outage type code, an outage start time, an outage stop time, etc. The outage type code may indicate the cause of the outage. In an operation 312, the circuit model is combined with maintenance data 1002 to associate an outage history with a device to create a historical reference of a device and gauge a propensity of a device to fail. In an operation 314, a twelfth indicator of a clustering distance value a may be received. In an alternative embodiment, the twelfth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the clustering distance value a may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the clustering distance value a may be a=150, though other values may be used. In an illustrative embodiment, the clustering distance value a is defined in meters. The clustering distance value a is used to group the utility assets within the clustering distance value a in a single cluster. For illustration, the clustering distance value a may be selected based on the detection range of the sensor detecting the signal intensity. The clustering distance value a and the grouping distance value g may have the same value, or the same parameter may be used for both. In an operation 316, the unique asset location data created in operation 304 are grouped into clusters having a diameter defined by the clustering distance value a. For example, the latitude and the longitude associated with each grid asset included in the unique asset location data may be converted to X, Y, Z coordinates in the ECEF coordinate system so that a Euclidean distance can be used to perform the clustering. An altitude may be assumed. For illustration, the FASTCLUS procedure included in SAS/STAT® 9.22 may be used to cluster the unique asset location data created in operation 304 into clusters having a size defined by the clustering distance value a. For example, a radius option value for the FASTCLUS procedure may be defined to have the clustering distance value a so that each cluster is separated by the clustering distance value a. A maximum number of clusters may be selected to ensure that the predefined set of locations can be covered based on the clustering distance value a. In an operation 318, a cluster centroid of each cluster defined in operation 316 may be stored as asset cluster centroid data. The data may be stored in computer-readable medium 108.
The cluster centroids may be converted from ECEF to a geodetic coordinate system for plotting on a map. The cluster centroids may include the source identifier, a latitude, and a longitude. The source identifier uniquely identifies each asset cluster. The cluster centroid may be a weighted cluster centroid using a number of assets as a weight variable based on the unique location identifier. For example, a transmission pole with three devices having the same geodetic location has a weight of four for that unique location identifier. The weighted cluster centroid enables the centroid of the cluster to be closer to a location with more grid devices, which is useful in rural areas where grid devices are further apart. In an operation 320, the utility assets included in each cluster are stored to asset cluster data. The asset cluster data may include the source identifier, a device identifier, a latitude, a longitude, a device type, an upline device, a downline device, a number of customers served, etc.
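For illustration, the weighted centroid of operation 318 reduces to a weighted mean of the ECEF coordinates; the coordinates and weights below are hypothetical:

    # Weighted cluster centroid: each unique location is weighted by its asset
    # count (a pole carrying three devices has weight 4), pulling the centroid
    # toward locations with more grid devices.
    def weighted_centroid(locations):
        """locations: iterable of (x, y, z, weight) in ECEF meters."""
        total = sum(w for *_, w in locations)
        return tuple(sum(loc[i] * loc[3] for loc in locations) / total
                     for i in range(3))

    # A pole with three attached devices (weight 4) and a lone fuse (weight 1):
    print(weighted_centroid([(100.0, 0.0, 0.0, 4), (0.0, 100.0, 0.0, 1)]))
    # -> (80.0, 20.0, 0.0): the centroid sits closer to the heavier location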
Referring to FIG. 4, example operations associated with data analysis application 126 are described. Additional, fewer, or different operations may be performed depending on the embodiment of data analysis application 126. The order of presentation of the operations of FIG. 4 is not intended to be limiting. Some of the operations may not be performed in some embodiments. Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions and/or in other orders than those that are illustrated. For example, a user may execute data analysis application 126, which causes presentation of third user interface window 500 (shown referring to FIGS. 5, 6, 7, 8A, 8A (continued), 8B, and 8B (continued)), which includes a plurality of menus and selectors such as drop-down menus, buttons, text boxes, hyperlinks, etc. associated with data analysis application 126 as understood by a person of skill in the art. The plurality of menus and selectors may be accessed in various orders. The operations of data analysis application 126 further may be performed in parallel using a plurality of threads and/or a plurality of worker computing devices. In an operation 400, overlay map data is created by combining the sensor path data with the asset cluster data to show a path of the vehicle-mounted sensors within a utility's assets region and to compute the miles covered by the sensor path. For example, the overlay map data may be created using define map overlay process 1018. In an operation 402, a thirteenth indicator of an asset/sensor distance value s may be received. In an alternative embodiment, the thirteenth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the asset/sensor distance value s may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the asset/sensor distance value s may be s=150, though other values may be used. In an illustrative embodiment, the asset/sensor distance value s is defined in meters. The asset/sensor distance value s is used to identify grid devices that may be associated with sensor measurements. For illustration, the asset/sensor distance value s may be selected based on the detection range of the sensor detecting the signal intensity. In an operation 404, unique asset location clusters are defined that include sensor events by joining the sensor event data with the asset cluster data and only selecting an asset cluster within the asset/sensor distance value s of the sensor event location. For example, the distances may be calculated using the SAS GEODIST function to define a list of asset cluster centroids that are within the asset/sensor distance value s of a sensor event included in the sensor event data. For illustration, the unique asset location clusters may be defined using location matching process 1024. In an operation 406, the created circuit model is combined with the defined unique asset location clusters using a database join function and the unique asset location identifier to add grid device details to each sensor event to define possible emission sources.
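The SAS GEODIST function computes geodetic distance; a hedged Python equivalent of the operation 404 matching (the event and cluster records are hypothetical) is:

    # Select asset clusters within s meters of each sensor event (operation 404).
    import math

    def geodist_m(lat1, lon1, lat2, lon2, r=6371000.0):
        """Great-circle distance in meters (spherical stand-in for GEODIST)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2)
             * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def match_events(events, clusters, s=150.0):
        """Yield (event id, cluster id) pairs no more than s meters apart."""
        for ev in events:
            for cl in clusters:
                if geodist_m(ev["lat"], ev["lon"], cl["lat"], cl["lon"]) <= s:
                    yield ev["event_id"], cl["source_id"]

    events = [{"event_id": "E1", "lat": 35.2270, "lon": -80.8431}]
    clusters = [{"source_id": "C9", "lat": 35.2272, "lon": -80.8433}]
    print(list(match_events(events, clusters)))  # [('E1', 'C9')], ~29 m apart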
In an operation 408, a fourteenth indicator of an emission source criterion may be received. In an alternative embodiment, the fourteenth indicator may not be received. For example, a default value may be stored, for example, in computer-readable medium 108 and used automatically. In another alternative embodiment, the value of the emission source criterion may not be selectable. Instead, a fixed, predefined value may be used. For illustration, a default value for the emission source criterion may be identification of an event in at least a predefined number of route traversals. The emission source criterion is used to identify grid devices that are probable emission sources of the sensor measurements, such as the RF emissions. In an operation 410, probable emission sources are identified by traversing the possible emission sources and applying the emission source criterion. For illustration, the probable emission sources may be identified using determine emission source status process 1026. For example, if the data that generated the possible emission sources was captured over a four-week time period, the emission source criterion may require that a probable emission source have been identified as a possible emission source three times during the four-week time period, where a route traversal was performed once per week. The identified probable emission source(s) may be provided as input devices for processing using the methods/systems described in U.S. Pat. No. 11,322,976 that issued May 3, 2022. The identified probable emission source(s) may be prioritized based on a number of customers that would be affected if the grid device were to fail, triggering an outage, and/or based on whether the grid device is a protective device such as a lightning arrestor, as described previously for prioritize assets process 1028. In an operation 412, the identified probable emission sources are stored to emission source data. The identified probable emission sources may be split into two different datasets, where one dataset includes all of the identified probable emission sources and a second dataset includes only the most recent identified probable emission sources. For example, the most recent time period may be based on the most recent four weeks of data. In an operation 414, third user interface window 500 is presented under control of data analysis application 126, which may be integrated with utility data transformation application 122 and/or with sensor data transformation application 124. For example, the first user interface window, the second user interface window, and third user interface window 500 may form a single user interface. Third user interface window 500 may be presented at any point in the operations of data analysis application 126, utility data transformation application 122, and sensor data transformation application 124. Selections made using third user interface window 500 may trigger the data transformation and/or data analysis as understood by a person of skill in the art. The one or more output datasets 128 may include one or more of the sensor path data, the sensor event data, the circuit model data, the cluster centroid data, the asset cluster data, the probable emission source data, etc. Referring to FIG. 5, third user interface window 500 is shown presenting outage history information using the circuit model combined with maintenance data 1002 in operation 312 of utility data transformation application 122 in accordance with an illustrative embodiment. For example, third user interface window 500 may be presented in the SAS Visual Analytics dashboard. The outage history information may be presented by user selection of outage history tab 502. Third user interface window 500 may further include vehicle route tab 504, point in time tab 506, and probable emission source tab 508. Outage history tab 502 shows where equipment failures have been seen historically to establish an understanding of poor-performing regions. Outage history tab 502 also shows a historical median time to repair for failed assets using a total number of customers affected and CMI. In an illustrative embodiment, outage history tab 502 may include a number of outages indicator 510, a number of circuits indicator 512, a number of customers affected indicator 514, a CMI indicator 516, a substation selector 518, an outage map 520, a circuit selector 522, a year selector 524, and a month selector 526. The articulation points and connected subgraphs may be used to calculate the impact of an outage. Number of outages indicator 510 shows a number of the grid devices that experienced an outage during the time period included in maintenance data 1002. For example, the number of assets may be computed by counting the number of unique grid devices included in maintenance data 1002. Number of circuits indicator 512 shows a number of the grid circuits that experienced an outage during the time period included in maintenance data 1002. Number of customers affected indicator 514 shows a number of customers that experienced an outage during the time period included in maintenance data 1002. For example, the number of customers that experienced an outage during the time period may be computed from the number of customers affected included in maintenance data 1002. CMI indicator 516 shows the CMI experienced during the time period included in maintenance data 1002. Substation selector 518 may be a drop-down selector with a list of the distinct substations identified in the utility data (grid data 1000 and/or maintenance data 1002) and included in the circuit model. The user can select a substation from the drop-down list to trigger presentation of the circuit model that includes the grid devices connected through the selected substation on outage map 520. Outage map 520 includes a map of the area that includes the grid devices included in the circuit model. The map may be created using GIS mapping software such as ArcGIS provided by ESRI, headquartered in Redlands, CA, USA. Referring to FIG. 5, outage map 520 shows grid device outage locations 521.
Each grid device outage location symbol of the grid device outage locations 521 indicates a grid device that experienced an outage using different symbols and/or colors. For simplicity, not all of the grid device outage locations are indicated using reference number 521. The distinct colors may be used to indicate the substation to which the grid devices are connected such that a common color indicates a common substation. Outage map 520 may be zoomed in or out, panned up, down, to the left or right, etc. Circuit selector 522 may be a drop-down selector with a list of the distinct circuits identified in the utility data and included in the circuit model. The user can select a circuit from the drop-down list to trigger presentation of the circuit model that includes the grid devices connected by the selected circuit on outage map 520. Year selector 524 includes a list of years from which the user can select. The selected year acts as a filter to modify the grid devices presented in outage map 520. Month selector 526 includes a list of months from which the user can select. The selected month acts as a filter to modify the grid devices presented in outage map 520. Each month may be associated with a most recent month in the selected year. Referring to FIG. 6, vehicle route tab 504 of third user interface window 500 is shown presenting the overlay map data generated by data analysis application 126 in operation 400 in accordance with an illustrative embodiment. Vehicle route tab 504 shows how well the circuit is covered by the sensor(s) mounted on vehicles. Vehicle route tab 504 may include outage map 520, a coverage indicator 604, a year selector 606, a month selector 608, a week selector 610, a substation selector 612, and a circuit selector 614. Outage map 520 shows vehicle routes 600 generated using the sensor path data overlaid on grid device clusters 602 generated using the cluster centroid data. Coverage indicator 604 shows the number of miles traveled by the vehicles as defined by vehicle routes 600. Year selector 606 includes a list of years from which the user can select. The year selected using year selector 606 acts as a filter to modify the vehicle routes 600 presented in outage map 520 to the selected year. Month selector 608 includes a list of months from which the user can select. The month selected using month selector 608 acts as a filter to modify the vehicle routes 600 presented in outage map 520 to the selected month of the selected year. Week selector 610 includes a list of weeks from which the user can select. The week selected using week selector 610 acts as a filter to modify the vehicle routes 600 presented in outage map 520 to the selected week of the selected month and year. Substation selector 612 includes a list of substations from which the user can select. The substation selected using substation selector 612 acts as a filter to modify the grid device clusters 602 presented in outage map 520 to the selected substation. Circuit selector 614 includes a list of circuits from which the user can select. The circuit selected using circuit selector 614 acts as a filter to modify the grid device clusters 602 presented in outage map 520 to the selected circuit. Referring to FIG. 7, point in time tab 506 of third user interface window 500 is shown presenting all of the probable emission source data generated by data analysis application 126 in operation 412 in accordance with an illustrative embodiment. Point in time tab 506 shows potential issues over time and includes the ability to look back in time to assess the health of the grid.
Point in time tab 506 can also be used to show maintenance activities that have been resolved by looking at a time period before the maintenance activity took place and a time period after. Point in time tab 506 can also be used to show how an unexpected event, such as a strong thunderstorm, impacted the circuit assets. Some of these can be transient effects from the storm that disappear in a few days. Point in time tab 506 includes outage map 520, a number of probable emission sources indicator 700, a start date selector 702, a stop date selector 704, a time interval selector 708, a substation selector 710, and a circuit selector 712. Outage map 520 shows probable emission source locations 706 using symbols in the illustrative embodiment. The symbols may be color coded to indicate an associated substation. Number of probable emission sources indicator 700 indicates a number of probable emission sources based on the selections made using start date selector 702, stop date selector 704, time interval selector 708, substation selector 710, and circuit selector 712. Start date selector 702 may be selected and dragged along a timeline 714 to change the start date for selecting the probable emission sources shown with a symbol in outage map 520 and included in number of probable emission sources indicator 700. Start date selector 702 may be defined initially based on an earliest timestamp in the probable emission source data. Stop date selector 704 may be selected and dragged along timeline 714 to change the stop date for selecting the probable emission sources shown with a symbol in outage map 520 and included in number of probable emission sources indicator 700. Stop date selector 704 may be defined initially based on a last timestamp in the probable emission source data. Time interval selector 708 includes a list of time intervals from which the user can select. The time interval selected using time interval selector 708 acts as a filter to modify the possible emission sources presented in outage map 520 to the selected time interval. Illustrative time intervals may include a most recent week, a most recent month, a most recent year, etc. Substation selector 710 includes a list of substations from which the user can select. The substation selected using substation selector 710 acts as a filter to modify the possible emission sources presented in outage map 520 to the selected substation. Circuit selector 712 includes a list of circuits from which the user can select. The circuit selected using circuit selector 712 acts as a filter to modify the possible emission sources presented in outage map 520 to the selected circuit. Referring to FIGS. 8A, 8A (continued), 8B, and 8B (continued), probable emission source tab 508 of third user interface window 500 is shown presenting current probable emission source data generated by data analysis application 126 in operation 412 in accordance with an illustrative embodiment. Probable emission source tab 508 shows a prioritized asset event listing of current issues to be addressed and can be used to prioritize a dispatch of workers to resolve the issues prior to any power interruption. Probable emission source tab 508 includes outage map 520, a number of probable emission sources indicator 800, a number of affected circuits indicator 802, a number of possible customers affected indicator 804, a CMI indicator 806, an emission source table 808, a substation selector 810, and a circuit selector 812. Outage map 520 shows probable emission source locations 814.
Number of probable emission sources indicator 800 indicates a number of probable emission sources based on the selections made using substation selector 810 and circuit selector 812. Number of affected circuits indicator 802 indicates a number of circuits that would be affected if the probable emission source(s) experience a failure leading to an outage. Number of possible customers affected indicator 804 indicates a number of customers that would be affected if the probable emission source(s) experience a failure leading to an outage. CMI indicator 806 indicates an estimated CMI if the probable emission source(s) experience a failure leading to an outage. Emission source table 808 (shown referring to FIG. 8A (continued)) includes a row that describes each grid device associated with each probable emission source. In the illustrative embodiment, columns of emission source table 808 include the source identifier (Group ID) that shows the asset cluster number for the grid device as defined in the asset cluster data of operation 320, a substation identifier to which the grid device is connected, a circuit to which the grid device is connected, the number of affected customers if the assets associated with the source identifier fail (CI), and the CMI if the assets associated with the source identifier fail. Referring to FIGS. 8B and 8B (continued), an asset event list table 816 may be selected instead of emission source table 808. Asset event list table 816 includes a row that describes each grid device associated with each probable emission source. In the illustrative embodiment, columns of asset event list table 816 include the source identifier (Group ID), the substation identifier, a circuit, a device identifier (Asset ID), a grid device type (Asset Type), the number of affected customers if the assets associated with the source identifier fail (CI), the CMI, and a grid device description (Asset Description). The rows of emission source table 808 and asset event list table 816 may be prioritized based on the number of affected customers and/or the CMI if the grid device failed and/or on the grid device type. For example, grid device types that are protective devices may be prioritized higher in emission source table 808 and asset event list table 816 regardless of the number of affected customers and/or the CMI. The word “illustrative” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “illustrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Further, for the purposes of this disclosure and unless otherwise specified, “a” or “an” means “one or more”. Still further, the use of “and” or “or” in the detailed description is intended to include “and/or” unless specifically indicated otherwise. The illustrative embodiments may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed embodiments. The foregoing description of illustrative embodiments of the disclosed subject matter has been presented for purposes of illustration and of description. It is not intended to be exhaustive or to limit the disclosed subject matter to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed subject matter.
The embodiments were chosen and described in order to explain the principles of the disclosed subject matter and as practical applications of the disclosed subject matter to enable one skilled in the art to utilize the disclosed subject matter in various embodiments and with various modifications as suited to the particular use contemplated.
79,534
11860213
DETAILED DESCRIPTION In the context of the following description, an electrical profile or simply “profile” is defined as a time series of an electrical input, staggered or spread over time. This electrical input can be a measure of an active energy consumption, an apparent energy consumption, a reactive energy consumption, a voltage, a current or any other variable of an electrical nature. The electrical profiles from single-phase and two-phase installations include at least voltage (in V) and active energy (in kWh) profiles, while for multi-phase electrical installations, the collected profiles include a measurement of total active energy (in kWh) and by electrical phase (φA, φB, φC), a measurement of total apparent energy (in kVAh) and by electrical phase, a measurement of total reactive energy (in kVARh) and by electrical phase, a measurement of voltages (in V) per phase and a measurement of the currents (in A) per phase. In the context of the following description, the profiles used are produced by a meter. A meter is an electrical measurement component integrated into an advanced metering infrastructure that produces, among other things, electrical profiles from an electrical installation connected to a low voltage network (for example, a network where the nominal voltage between phases does not exceed 750 V) or a medium voltage network (for example, where the nominal voltage between the phases is more than 750 V and less than 44,000 V). These meters, whose main function is energy measurement for billing purposes, are sometimes referred to as electric meters, smart meters, communicating meters, or next generation meters (NGMs). In the context of the following description, the profiles produced by the meters are processed by IT tools, including applications and algorithms, making it possible to identify electrical installations likely to exhibit an electrical non-conformity (ENC). The term “IT tools” is understood to mean computing devices, such as computers and/or servers, databases and software applications capable of applying algorithmic processing to the electrical profiles. Computers and/or servers include one or more algorithmic processing units, including one or more processor(s) and one or more data storage devices (memory). The term “computing device” encompasses computers, servers and/or specialized electronic devices which receive, process and/or transmit data. “Processing devices” are generally part of “systems” and include processing means, such as microcontrollers, microprocessors or CPUs, or are implemented on FPGAs, as examples only. The processing means are used in combination with a storage medium, also referred to as “memory” or “storage means”. A storage medium can store instructions, algorithms, rules and/or data to be processed. Storage medium encompasses volatile or non-volatile/persistent memory, such as registers, cache, RAM, flash memory, ROM, as examples only. The type of memory is of course chosen according to the desired use, whether it should retain instructions, or temporarily store, retain or update data. Steps of the proposed method are implemented as software instructions and algorithms, stored in computer memory and executed by processors. It should be understood that servers and computers are required to implement the proposed system, and to execute the proposed method. The IT tools can be centralized or distributed.
The term “ENC” is understood to mean anomalies related to the electrical position of the meters, deviations from the operating standards established by the electrical utilities and anomalies associated with the measurement process. The latter category is called “non-technical losses” and includes energy theft. These ENCs can be associated with the electrical installations of customers whose connection can be single-phase or multi-phase in nature. In the context of the following description, the term “customer” is understood to mean each of the users connected to the low-voltage or medium-voltage electrical network. This connection is made via an electrical installation. An electrical installation is understood to mean the electrical components required to supply a customer's electrical loads. Without being limited to the scope of the present invention, most installations have at least one meter adapted to the nature and magnitude of the load, and one or more distribution panels also adapted to the nature and magnitude of the load. The electrical panels allow the distribution of electricity to the customer's various electrical equipment. Most of the existing solutions for detecting the presence of ENCs, which may be indicative of energy theft, involve the addition of sub-metering equipment to the electrical distribution network. This sub-metering equipment, such as meters or current sensors, is installed upstream of customers' electrical installations and makes it possible to establish energy or current balances in an electrical cell or at a current node (Kirchhoff's law). The sub-metering infrastructure thus added, in addition to the existing meters associated with customers' electrical installations (the NGM meters described above), involves substantial costs linked to its acquisition, deployment and maintenance. The invention described in the following paragraphs relates to a method, a system and a tangible computer program product for the identification or detection of electrical installations likely to exhibit an ENC, without resorting to the addition of sub-metering equipment. The proposed system and method differ from existing solutions in that they only use the electrical profiles generated by meters associated with customers' electrical installations and the IT tools developed. The data transmitted by the meters and retrieved in the form of profiles are conditioned in order to apply different algorithmic processing operations, each of these processing operations being linked to a given ENC indicator. The values generated by the indicators also make it possible to specify the nature and importance of an ENC. The results of the different algorithmic processing operations are compared to target conditions, varying from one indicator to another. ENC indicators can take different values, such as false or true, a percentage, a ratio, a score, etc. Electrical installations likely to exhibit an ENC are identified using indicators that have met or fulfilled their target conditions. Some of the indicators are specialized for single-phase electrical installations, while others are used for ENC detection in multi-phase electrical installations. Indicators can also be autonomous or relational in nature. An ENC indicator is considered to be “autonomous” if its algorithmic processing only involves data from the profiles of the installation analyzed. In the event that the algorithmic processing of an ENC indicator requires data from profiles of electrically neighboring installations, that indicator is considered to be “relational”.
Electrically neighboring installations, hereinafter referred to as “neighboring installations”, are understood to mean all the installations which are connected to the same distribution transformer, or to the same electric phase, or to the same electric line, or even to the same distribution station. FIG. 1 shows the various components necessary for carrying out the method allowing the identification of electrical installations likely to exhibit an ENC, including the components involved at an early stage in the process. It shows a simplified electrical distribution network (100), which includes a plurality of single-phase electrical installations (110) and multi-phase electrical installations (112). Although few electrical installations are shown in FIG. 1, it should be noted that an electrical distribution network can have several thousand or even several million electrical installations. The electrical installations are connected to transformers (116), which themselves are connected to electrical lines or arteries of the distribution network (100). The latter converge towards distribution stations, not shown in FIG. 1. Each electrical installation (110, 112 and 120) is connected to a distribution transformer (116). Each meter (120) comprises measuring means and data transmission means. The measurements taken by the meter (data and profiles) are thus routed to a data management system (170), called “MDMS”, short for Meter Data Management System. Each meter also includes control means for interrupting the power supply to the electrical installation to which it is linked. These means can be activated by sending a signal from the central monitoring and management system to the meter (a request to open a control element located in the meter). Thus, it is possible, using the tools developed for the present invention, to interrupt the supply of electricity by sending a request to open a control element located in the meter linked to the electrical installation determined to be non-conforming. The MDMS (170) includes a database (172) to store the raw data transmitted by the meters. The MDMS (170) and the database (172) can be located on one or more servers located in the same building, or can be distributed between several servers in different locations, for example in a cloud data infrastructure. As shown in FIG. 1, the meters do not communicate directly with the MDMS. The meters can relay information between themselves or send it directly to a router (115). The routers communicate with collectors (130), which in turn transmit the information to the MDMS (170) via a Wide Area Network (WAN) (140). The data taken by the meters is then routed to a front-end data acquisition system (160) and then to the MDMS (170). A firewall security system (150) is used to protect the meter data. Of course, other network configurations can also be considered. The implementation of the method of identifying electrical installations likely to exhibit an ENC is carried out using computing devices and specifically designed IT tools, including a specialized software application. This application is deployed in a computer system (180) which may include an algorithmic processing unit, including one or more processors and a central or distributed storage memory. The system (180) may also include one or more servers and a database (182).
The latter is used to store, among other things, electrical profiles from the MDMS, distribution network topology data (from the Geographic Information System (GIS) of the electrical utility), nominative data related to a meter and an electrical installation (also called “customer vectors”) and meteorological data indicative of local weather conditions. The database can also store other information described in more detail below, including, for example, calculated ENC indicators and unique identifiers associated with electrical installations. FIG. 2 shows the global process for the automatic detection of the electrical installations likely to exhibit an ENC via the application of the various algorithmic processing operations associated with the various indicators of an ENC. This process is segmented into different steps. The first step (200) consists in retrieving the electrical profiles. This step involves retrieving, from the database associated with the meter data management system, the electrical profiles associated with the electrical installations. This step can also include the extraction of additional data, including, for example, nominative data, the topology of the electrical distribution network, the meteorological data, and other data used for the application of the algorithmic processing. According to a preferred embodiment, it is possible to select the extent of the processing in both electrical and temporal terms. From an electrical point of view, the processing can be carried out at the level of the transformer, a phase, a line or a distribution station. From a temporal point of view, the processing period can vary from a few hours to several days or even a few months depending on the level of precision and the type of information required. The second step (210), which is optional, allows receiving a selection of the indicators to be calculated as well as the level of the different thresholds to be applied for the target conditions. This step is optional since all indicators can be applied by default. An ENC indicator is the result of an algorithmic processing applied on at least one electrical profile, which can be compared to a target condition in order to identify an ENC. According to a preferred embodiment, it is possible for the system to receive an indicator selection, via a specialized application, of only some (a subset) of the ENC indicators to be applied to the analyzed profiles, according to the nature of the customer's electrical installation or according to the type of desired search. The specialized application also makes it possible to modify the target conditions, by adjusting the default values of the different thresholds and constraints (voltage, current, period of time, number of occurrences, etc.), thus allowing the behavior and sensitivity of the algorithmic processing associated with the indicators to be managed. The adjustment of thresholds can, for example, be carried out following field inspections confirming or denying non-conformities. To increase or decrease the sensitivity of certain indicators, thresholds can be adjusted retroactively, depending on the inspection results. It may also be possible to adjust thresholds by region or distribution station. The third step (220) makes it possible to apply, to the profiles of each of the installations included in the selected electrical range (200), the algorithmic processing operations specific to each of the selected indicators (210).
For each of the calculated indicators, a verification is carried out in relation to the target conditions. When at least one of the indicators meets its target conditions, the electrical installation from which the profiles under study originate is deemed likely to exhibit an ENC and an entry is added to the results file or to a database. The fourth step (230) consists of an analysis of the indicators that have fulfilled their target conditions in order to identify the electrical installations that are likely to exhibit an ENC. Depending on the number of indicators, their occurrence and type, a degree of certainty, or likelihood, that an ENC is exhibited can be assessed. According to a preferred embodiment and in certain specific cases, the analysis can lead directly to the interruption of the power supply to the installation (260), provided the degree of certainty of the presence of an ENC is sufficiently high. Otherwise, a detailed analysis of the data of the electrical installation can be carried out in order to confirm or deny the potential ENC. The identification of an electrical installation likely to exhibit an ENC is not based solely on the detection of a single indicator observed at a specific time, but rather on a set of indicators and/or a certain recurrence of indicators. When the potential ENC is maintained, an inspection request is automatically issued (240) and an inspection (250) of the electrical installations is carried out. Finally, for the cases of an ENC validated by an inspection, a restoration of conformity of the installations is carried out (270), preceded or not by an interruption of the power supply (260), according to the result of the inspection and the nature of the ENC. FIG. 3 illustrates the flow of data and information associated with the process of identifying electrical installations likely to exhibit an ENC, according to a preferred embodiment. Although the inputs and outputs are illustrated in the form of files (csv, txt, docx or jpg), they can take various other forms, for example, those coming from or supplying a database. According to a preferred embodiment, the system receives a selection, via the user interface of a specialized application (300), of all the processing control parameters. Without being limited to the scope of the present invention, these parameters can correspond, among others, to the electrical and temporal extent of the processing to be applied, the indicators to be calculated and the thresholds and constraints associated with each of them. The electrical profiles (310) correspond to the electrical measurements generated by the meters. The system can prioritize the profiles by adding the topology of the electrical distribution network and by cross-referencing the profile data with the network topology data from the GIS of the electrical utility. Using the network topology, the system can associate the different galvanic links connecting a customer's electrical installations to the distribution station, i.e., transformer, phase, artery. Some galvanic links can be questioned through the calculation of positioning indicators.
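For illustration, the indicator/target-condition pattern of the second and third steps (210, 220) can be captured with a small data structure; the example indicator, its threshold, and the profile values below are hypothetical:

    # Each ENC indicator pairs an algorithmic processing step with a target
    # condition whose threshold can be tuned (step 210).
    from dataclasses import dataclass
    from typing import Callable, Sequence

    @dataclass
    class Indicator:
        name: str
        compute: Callable[[Sequence], float]  # algorithmic processing
        target: Callable[[float], bool]       # target condition

    indicators = [
        # e.g. flag a profile with a non-standard share of missing energy samples
        Indicator("energy_data_capture_rate",
                  lambda xs: sum(x is not None for x in xs) / len(xs),
                  lambda rate: rate < 0.90),
    ]

    profile = [1.2, None, 1.1, None, None, 1.3, 1.2, 1.0]  # 15-minute kWh intervals
    met = [i.name for i in indicators if i.target(i.compute(profile))]
    if met:  # step 220: at least one target condition fulfilled
        print("installation likely exhibits an ENC:", met)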
Nominative data, sometimes referred to as “customer vectors” (311), contains data to characterize customers' electrical installations. Customer vectors include at least one of the following pieces of information: the nature of the electrical installation (single-phase or multi-phase); the billing to which the electrical installation is subject; the building's use (residential, commercial, institutional or industrial); or the main source of energy used for heating the building(s). This nominative data is recovered by the system, from the database 182, and used for the selection of the algorithmic processing to be applied to the electrical profiles of the installation or for the validation of potential ENCs. Algorithmic processing can then cross-reference the ENC indicators that have met their target conditions with the nominative data to confirm or deny that the electrical installations identified in step c) are likely to exhibit an ENC. For example, it is possible that an electrical installation may consume very little energy, even during the winter, if the customer's heating type is wood or gas, in comparison to neighboring installations that use electric heating. Cross-referencing the type of heating (or main source of energy) for a given electrical installation, as provided in the nominative data, with the electrical profile allows the system to confirm or deny whether the installation is likely to exhibit an ENC. The meteorological data (312) includes at least a local outdoor temperature profile corresponding to the study period (date and time to specify the temporal range). The meteorological data can be recovered by the system and used in the algorithmic processing associated with certain indicators, or can be used to confirm or deny a potential ENC. Again, a consumption peak for facilities in a given region can be explained by a period of extreme cold. Thus, an ENC detected in step c) can be validated with additional data (nominative and meteorological). At the end of the processing, according to a preferred embodiment, the results are compiled into a data structure or structure of results, also known as a “cube” (330). The data structure includes, at a minimum, for each installation likely to exhibit an ENC, the list of indicators that have met their target conditions, the value of the indicators, and one or more unique identifiers to distinguish between installations on an electrical distribution system. The unique identifier may include, for example, the street address or serial number of the meter associated with the installation. Some information from the nominative data (311) can also be added to the structure of results to facilitate the production of inspection requests. The structure of the results can also include figures or graphs making it possible to show the ENC indicators that have fulfilled the target conditions, such as those shown in FIGS. 5, 6 and 7. The content of the structure can be exported in different file formats or saved directly in a database. The results can be grouped by lines, distribution stations or regions, depending on the electrical extent of the processing.
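As a minimal sketch of the nominative-data cross-check described above (the 5 kWh/day cut-off and the field values are invented for illustration):

    # A very low winter consumption is only suspicious when the customer's main
    # heating source is electricity; wood or gas heating denies the potential ENC.
    def confirm_low_consumption_enc(winter_kwh_per_day, heating_source):
        suspiciously_low = winter_kwh_per_day < 5.0
        return suspiciously_low and heating_source == "electric"

    print(confirm_low_consumption_enc(2.0, "wood"))      # False: explained by heating type
    print(confirm_low_consumption_enc(2.0, "electric"))  # True: potential ENC stands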
In the event that no analysis or additional information is available to deny a potential ENC associated with a customer's electrical installation, an inspection request (350) for the said electrical installation is automatically generated by the system, using the specialized software application. The use of a pre-formatted template (320) allows the system to automatically generate an inspection request. This request contains the information required for the inspection, i.e., at a minimum, the customer's personal information, the nature of the suspected ENC and the automatically assigned inspection priority level. The priority can be determined by the system according to the degree of certainty as to the existence of the ENC(s) for the said installation. A figure or a graph (340) illustrating the circumstance having led to the identification of the potential ENC can also be generated by the system and added to the inspection request. It is the inspection of an installation that will provide the final and unequivocal confirmation of the presence of an ENC. Based on the inspection results obtained, feedback on the default values of the thresholds and constraints of the ENC indicators can be determined in order to increase the overall performance of the detection method. This performance, expressed as a percentage of likelihood or degree of certainty, is defined as the ratio of the number of confirmed ENC cases to the total number of ENC cases that have been inspected or whose power supplies were interrupted. FIG. 4 shows a classification of the different ENC indicators that have been created, according to a preferred embodiment of the proposed method (400). The first level of classification makes it possible to distinguish between ENC indicators applied to single-phase (410) and multi-phase (450) electrical installations. Under these levels, three classes of indicators can be defined: a first class of indicators called “electrical positioning indicators” (420); a second class of indicators called “state indicators” (430 and 460); and a third class of indicators called “non-technical loss indicators” (440 and 470). This last class groups together anomalies that affect the measurement of electrical energy and includes several subclasses of indicators. The “electrical positioning indicators” (420), specific to single-phase installations, allow the system, through statistical and electrical analysis, to confirm or deny the accuracy of the galvanic link that connects a customer's electrical installation (120 in FIG. 1) to its distribution transformer (116 in FIG. 1) and its belonging to the line being analyzed. As long as the galvanic link is validated, no particular attribution is made to the electrical installation. Otherwise, if the positioning indicators show that the customer's installation still appears to belong to the power line under analysis, then the installation is given the characteristic of “installation or customer incorrectly located”. If, on the contrary, the indicators show that the customer's installation does not appear to belong to the power line being analyzed, then the installation is given the status of “OUT”, i.e., the installation does not belong to the line being analyzed. The method thus also includes a step to validate, as explained above, from the calculated indicative positioning data, a probability that the electrical installations identified in step c) are non-conforming installations. In this example, if the positioning indicator is set to “OUT”, then the likelihood that the electrical installation is non-conforming is low, since the positioning of the electrical installation is simply mislocated, but not necessarily non-conforming.
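The positioning outcome just described amounts to a three-way decision; as a hedged sketch (the boolean inputs abstract over the actual statistical and electrical analysis):

    # Outcome of the electrical positioning indicators (420).
    def positioning_status(link_validated, belongs_to_line):
        if link_validated:
            return "no attribution"        # galvanic link confirmed
        if belongs_to_line:
            return "incorrectly located"   # still appears to belong to the line
        return "OUT"                       # does not belong to the line analyzed

    print(positioning_status(False, False))  # 'OUT': mislocated, low ENC likelihood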
These indicators, taken individually or as a whole, are calculated by the one or more processing devices of the system to confirm or deny the existence of a potential ENC. The presence of a state indicator in the structure of results can also lead, directly or after analysis, to the confirmation of an ENC. For example, a state indicator can highlight a non-standard lack of data in the voltage or consumption profiles of a single-phase installation, or show a voltage or current imbalance in a multi-phase installation. State indicators may include one or more of the following indicators: energy data capture rate; voltage data capture rate; voltage imbalance; current imbalance; and apparent-to-active energy ratio. As with the positioning indicators, a probability that the electrical installations identified in step c) are non-conforming installations can be validated by the system from the calculated state indicator data. The confirmation can be conducted by the system by comparing state indicator values with standard threshold values and confirming the non-conformity of an electrical installation when more than X indicators exceed their corresponding thresholds. "Non-technical loss indicators" (440 and 470) are indicators that reveal, through the execution of algorithms, potential electrical anomalies that affect the measurement of the electrical energy consumed. This class can be subdivided into subclasses. In the representation in FIG. 4, six (6) subclasses have been established. It is specified that other subclasses of ENC indicators can be defined. The various subclasses of non-technical loss indicators include: the detection of meter tampering or of a defective meter (441 and 471); the detection of anomalies by comparing electrical profiles (442 and 472); the detection of inadequate meter connections or meter components (443 and 473); the detection of transient aberrations in electrical profiles (444 and 474); the detection of a non-standard way of operating, or operating mode (445 and 475); and the detection of non-conforming cyclic electrical loads (446 and 476). The following paragraphs describe in more detail the different subclasses of non-technical loss indicators. It is important to note that an indicator may be found in one or more subclasses. This is the case, for example, of the indicator that detects negative values in active consumption profiles. The existence of negative values can be attributed to the subclass "detection of meter tampering or a faulty meter", while also being part of the subclass of indicators revealing a non-standard operation. The first subclass, "detection of meter tampering or a faulty meter" (441 and 471), groups together all the indicators whose algorithmic processing results can be explained by a meter manipulation or a malfunction of the meter. For example, the system uses one of the autonomous indicators in this subclass to analyze the voltage profile of a single-phase installation. If this profile shows an average voltage in the order of 50% of the nominal voltage, the system is configured to detect that either the meter is faulty or a manipulation of the voltage coil connections has been intentionally made. On the other hand, if the voltage level is variable over time and therefore arbitrary, the system is configured to determine that a malfunction of the meter can be suspected.
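By way of illustration only, the voltage-coil check just described can be sketched in a few lines of Python; the function name, the tolerance and the variability threshold below are assumptions made for the sketch, not values prescribed by the method.

import numpy as np

def check_voltage_coil(voltage_profile, nominal_voltage,
                       level_tolerance=0.05, variability_threshold=0.10):
    # Classify a single-phase voltage profile as suggesting coil tampering,
    # a suspected meter malfunction, or normal operation.
    v = np.asarray(voltage_profile, dtype=float)
    mean_ratio = v.mean() / nominal_voltage
    variability = v.std() / v.mean()  # how arbitrary the level is over time
    if abs(mean_ratio - 0.5) < level_tolerance and variability < variability_threshold:
        # Stable level near 50% of nominal: faulty meter, or intentional
        # manipulation of the voltage coil connections.
        return "coil tampering or faulty meter"
    if variability >= variability_threshold:
        # Variable, arbitrary level over time: a meter malfunction is suspected.
        return "suspected meter malfunction"
    return "normal"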
Without being limited to the scope of the present invention, indicators in this subclass may be calculated by the one or more processing devices and include: alteration of the voltage coil; identical energy data; identical voltage data; resistance in series on a current transformer; and zero three-phase current with non-zero consumption. As per the example provided above, the one or more computing devices of the system can confirm, using at least one of the meter-tampering indicators, the likelihood that the electrical installations identified in step c) are non-conforming installations. The second subclass, called "detection of anomalies by comparison of the electrical profiles" (442 and 472), comprises ENC indicators of the "relational" type for single-phase installations and of the "relational" or "autonomous" type for multi-phase electrical installations. In general, in this subclass, the algorithmic processing of indicators aims to identify the differences between the various profiles coming from electrically neighboring installations. For example, one of the single-phase relational indicators of this subclass analyzes the average voltage profiles of electrical installations over a certain period of time. In the event that the maximum difference between the average voltage levels of the installations is greater than a certain threshold and no electrical parameter justifies it, a potential ENC is assigned to the installation being analyzed. Another example is an indicator, called the current ratio, which analyzes the current profiles of each of the supply phases of a multi-phase installation. This indicator can be used to identify multi-phase meters where at least one of the current profiles has a different average level from the others, while having an almost identical shape. As shown in FIG. 5, this indicator uses, in its algorithmic processing, the statistical notions of slopes (mAB, mBC, mCA) and determination factors (R²AB, R²BC, R²CA) applied to the pairs of the different current profile values. When the value of the determination factor is close to unity and the slope is outside the thresholds, a potential ENC is assigned to the installation being analyzed. The three graphs at the top of FIG. 5 show a case of a three-phase electrical installation for which the current profiles are similar for phases A, B and C. However, these same graphs also show a lower current level on phase C. These graphs illustrate that there may be a potential ENC in the metering components of the electrical installation. FIG. 5 also shows an example of a graph that can be generated automatically (bottom image) showing the different statistical values used. Without being limited to the scope of the present invention, indicators for this second subclass may include: night-time consumption; voltage deviation of single-phase average values; voltage deviation of inter-phase average values; voltage deviation of inter-customer average values; voltage deviation of average values under a multi-phase transformer; voltage level at zero consumption; current ratio; and unsynchronized voltage loss and return. The one or more processing devices, part of system 180, can calculate at least one of the abnormality indicators listed above by comparison. The system can validate, based on the at least one abnormality indicator, the likelihood that the electrical installations identified in step c) are non-conforming installations.
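A minimal sketch of the current-ratio computation just described follows, assuming numpy is available; the slope and determination-factor thresholds are illustrative placeholders, not values fixed by the method.

import numpy as np

def current_ratio_indicator(i_a, i_b, i_c,
                            slope_low=0.8, slope_high=1.2, r2_min=0.95):
    # Pairwise linear regression between phase current profiles: a pair with
    # a determination factor close to unity but a slope outside the
    # thresholds points to a potential ENC.
    suspects = []
    for name, (x, y) in {"AB": (i_a, i_b), "BC": (i_b, i_c), "CA": (i_c, i_a)}.items():
        x, y = np.asarray(x, float), np.asarray(y, float)
        slope, _intercept = np.polyfit(x, y, 1)   # slope m of y = m*x + b
        r2 = np.corrcoef(x, y)[0, 1] ** 2         # determination factor R²
        if r2 >= r2_min and not (slope_low <= slope <= slope_high):
            suspects.append((name, slope, r2))
    return suspects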
The processing devices of the system can also be configured to calculate indicators of meter connections or unsuitable components. These indicators form a third subclass, called "detection of inadequate meter connections or components" (443 and 473), which is used to detect electrical installations with an inadequate electrical connection to the meter (single-phase installation) or to a component of the metering installation (multi-phase installation). An example of an indicator found in this subclass is one that aims to detect, under specific conditions, negative values in the consumption profiles of installations. The existence of these values in the consumption profiles is, with a high probability, due either to an inversion or half-inversion of the meter connections (single-phase installation) or to an inversion of the metering sub-components (multi-phase installation). It should be noted here that under certain conditions, negative energy values may be found in the consumption profiles of the electrical installations of electricity producers, transporters or distributors. For this reason, the latter are excluded from this type of analysis. FIG. 6 shows the power consumption profiles of some single-phase customer installations under the same distribution transformer. One of these installations shows negative values over the entire analysis period, which is impossible and therefore indicative of an anomaly. Other indicators of this subclass include: an absence of current, an absence of voltage, or zero voltage with current. The likelihood that an electrical installation is non-conforming can be validated or confirmed based on at least one of these indicators of meter connections or unsuitable components. A fourth subclass, called "detection of transient aberrations in electrical profiles" (444 and 474), includes indicators of a sudden and momentary change in an electrical variable that cannot be explained by other local or nearby electrical variables or by customer vector information. This subclass includes the indicator for the identification of large voltage variations. For example, the algorithmic processing related to large voltage variations may include steps to calculate voltage variations, for a given profile, between two consecutive measurement periods; to retain variations that are above or below predetermined thresholds; to calculate the estimated energy required for these variations; and then to compare them with the measured energy values for these periods. A "large voltage deviation" non-conformance is detected for installations with voltage variations that do not correspond to the energy demand that should be associated with them. A fifth subclass is called "detection of a non-standard operating mode" (445 and 475), in which the indicators identify, in the electrical profiles, operating conditions deemed to be outside the operating standards specific to each electrical utility. For example, the algorithmic processing associated with the non-standard voltage indicators may include steps to calculate the average profile voltage for profiles with a non-zero current value, and to compare the average profile voltage to predetermined minimum or maximum voltages. FIG. 7 provides a graphical example of a single-phase facility that, at peak, consumes more than 14 kWh/15 minutes, while the facility has a maximum capacity of 12 kWh/15 minutes at 100% of its rated load (active energy consumption indicator).
This subclass of indicators also makes it possible to identify average values, in voltage profiles, that exceed the values under marginal operating conditions defined by power system operators. Indicators in this category may include: a dual-energy heating indicator; non-standard peak power; non-standard average voltage; a single-connection transformer indicator; a disparate determination factor; validation of active energy consumption; or non-standard voltage with current. The one or more computing devices can validate, on the basis of at least one non-standard operating-mode indicator listed above, the likelihood that the identified electrical installations are non-conforming installations. Finally, a sixth subclass, called "detection of non-conforming cyclic electrical loads" (446 and 476), includes indicators that identify, through the analysis of voltage and energy profiles, the presence of non-conforming cyclic loads. Non-conforming cyclic loads are understood to mean all cyclic loads that are not correctly measured by the meter or its components, owing to an alteration of the latter or of their environment. Without being limited to the scope of the present invention, the algorithmic processing related to this type of indicator may, among others, include the calculation of the Fast Fourier Transform (FFT), the calculation of the correlation and autocorrelation of profiles, the calculation of certain occurrences, and the analysis and processing of voltage and energy profiles. The algorithmic processing of the indicators of this subclass is carried out for periods considered optimal for the specific search for an ENC on the distribution network. Regardless of the classification, it is important to note that the application and management of the algorithmic processing proposed by this method (400) are complex, given the large volume of profiles to be processed and the number of indicators to be calculated. Data from hundreds of thousands, or even a few million, electrical installations are analyzed. Obviously, this processing cannot be performed manually. A specialized software application, consisting of instructions that can be executed by one or more processors, including one or more ALUs (Arithmetic Logic Units), is essential for the realization of the proposed method. In summary, the method (400) and the system described above, which includes a tangible and non-transitory computer program product (software application), make it possible to identify the electrical installations likely to exhibit an ENC. As described above, an indicator is the result of the execution of an algorithmic processing applied to electrical and thermal profiles (meteorological database). The estimation of certain indicators and the validation of the existence of certain ENCs are also made possible through the use of an additional database containing nominative data (or customer vectors). As outputs, the specialized application allows the identification of electrical installations requiring a field inspection or, depending on the degree of certainty of the ENC, an automatic interruption of the power supply to an installation. The proposed method and system do not require any other components to be installed on the distribution system. This innovative feature significantly reduces the costs of deployment (acquisition) and use (replacement and maintenance) of the detection method compared to existing methods.
The method and the system also make it possible to process large quantities of profiles, associated with a plurality of electrical installations, in an automated way, with little or no human intervention. The proposed method and system automate the process of detection and identification of electrical installations likely to exhibit an ENC, from the collection of profiles, through the selection of the indicators to be applied, the associated algorithmic calculations and the identification of electrical installations, up to the automatic interruption (if necessary) of power to electrical installations confirmed as non-conforming. Although the concepts, data flows and methods associated with the invention and its results have been illustrated in the attached drawings and described above, it will be apparent to people skilled in the art that modifications can be made to these embodiments without departing from the invention.
38,668
11860214
DETAILED DESCRIPTION FIG. 1 shows a diagram of a system for monitoring the state of a cable C according to one embodiment of the invention. The system comprises a plurality of reflectometry sensors or devices M1, M2, M3, . . . , Mn-1, Mn placed along the cable C at chosen points that thus bound cable segments S1, S2, . . . , Sn-1. Each reflectometry device is configured to perform two separate functions: a first function injecting a test signal into the cable C and a second function measuring a signal propagating through the cable C. To this end, each reflectometry device comprises means for generating a test signal, for example a signal generator or a memory in which a digital signal is stored. The signal may be analog or digital. In the case where the signal is digital, the device also comprises a digital-to-analog converter. Each device also comprises a coupler for injecting the test signal into the cable C. Advantageously, the coupler also has the function of capturing a signal propagating through the cable. The coupler may be achieved via physical contact or via capacitive or inductive contactless coupling. The captured signal is optionally digitized via an analog-to-digital converter and then transmitted, via a communication network RC, to a post-processing unit PTR that is responsible for analyzing the signal. The type of test signal used may be a pulsed signal, for example a square-wave or a Gaussian pulse, or a more complex signal, for example a multi-carrier OMTDR signal (OMTDR being the acronym of orthogonal multi-tone time-domain reflectometry). The type of signal, the power of the signal injected into the cable, its frequency and its sampling frequency may be parameterized depending on the nature of the cable to be monitored, and especially on the attenuation characteristics of the cable. These parameters also depend on the nature of the coupler used and on the precision desired for the measurement of the signal. The distance between two devices M1, M2 especially depends on the attenuation and dispersion of the cable, and on the level of precision desired for the measurements. The distance between two devices M1, M2 is especially chosen so as to limit to a threshold value the level of attenuation of the signal when it makes the trip between two neighboring devices M1, M2. The threshold value is chosen, for example, so as to respect a minimum signal-to-noise ratio computed beforehand to respect a chosen link budget. Thus, positioning a plurality of devices along the cable C allows each cable segment to be monitored in a manner independent of the effect of signal attenuation. The communication network RC may be achieved by any means allowing the signal measured by each device M1, M2, M3, . . . , Mn-1, Mn to be transmitted to a remote post-processing unit PTR. For example, the communication network RC is a wired network, based on optical fiber or another type of communication cable, or even a wireless network. In the case of a wireless network, each device M1, M2, M3, . . . , Mn-1, Mn is equipped with a transmitter able to transmit data to the post-processing unit PTR and with a receiver able to receive control information transmitted by a control unit CTRL. The function of the control unit CTRL is to implement a test procedure by controlling the various devices placed along the cable.
In particular, the control unit CTRL transmits commands to each device so as to activate or deactivate injection of a test signal into the cable and measurement of this signal after it has propagated back through the cable to the point of injection (which is also a measurement point). The control unit CTRL is responsible for managing the sequence of activation of the various devices depending on the propagation time of the signal. With reference to FIG. 1, an example of a test procedure employed to successively monitor the state of the segment S1 and then the state of the segment S2 of cable C will now be described. Initially, all devices are deactivated, i.e. the signal-injection function and the signal-measurement function are deactivated. Generally, the test procedure consists in implementing the following successive steps: monitoring a first segment S1 for a first duration, called the monitoring duration; then waiting a second duration, called the delay duration; and then monitoring a second segment S2 for the same monitoring duration. These steps are iterated for all the segments or for a set of chosen segments. Without loss of generality, the various segments S1, S2, . . . , Sn-1 may be of the same length or of different lengths. In the second case, the duration for which each segment is monitored may be adapted to the respective lengths of the segments and therefore be different. To simplify the implementation of the system, a common monitoring duration may be chosen by selecting the longest monitoring duration, i.e. the one corresponding to the segment of largest length. To monitor the first segment S1, the control unit CTRL transmits, to the first device M1, an activation command. The activation command transmitted to the device activates the injection of a test signal and activates the acquisition of a measurement signal. There are two possible scenarios. If the cable segment S1 is fault-free, the signal is either reflected from the impedance discontinuity caused by the coupling between the second device M2 and the cable, or transmitted without reflection beyond M2. If, on the contrary, the cable segment S1 is degraded by a fault, the signal is entirely or partially reflected from the fault. The measured signal is transmitted by the device M1 to the post-processing unit PTR. The signal injected by the device M1 may continue to propagate beyond the device M2 to the device M3. The monitoring duration is determined depending on the duration of the measurement that it is desired to carry out to analyze the state of a segment. When the monitoring duration has elapsed, the control unit CTRL transmits, to the first device M1, a deactivation command. The delay duration is determined depending on the speed of propagation of the signal through the cable, on the length of a cable segment and on an attenuation coefficient of the signal in the cable. It is especially computed so as to take into account the time that the signal injected by the first device M1 will require to travel beyond the second device M2. Moreover, it is advisable to also take into account potential multiple reflections of this signal from impedance discontinuities. Thus, to compute the delay duration, a worst-case situation is considered. A worst case is, for example, obtained by considering reflections of the signal occurring just after and just before the second device M2. By considering the average power of the signal and its attenuation coefficient (which depends on the physical characteristics of the cable), the attenuation of the signal over time and on each of the multiple reflections may be computed. When the attenuated signal has a power (or an amplitude) lower than a predetermined threshold, its influence may be considered to be negligible. The delay duration may therefore be set equal to the cumulative duration of the multiple trips that the signal is expected to make before its power or amplitude drops below a predetermined threshold. More generally, the delay duration may be defined so as to allow a sufficient margin to prevent interference between the signals transmitted by two neighboring devices.
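As a rough illustration of this worst-case computation, the following sketch accumulates trips over one segment until the attenuated amplitude becomes negligible; the exponential attenuation model and every numerical value below are assumptions made for the example, not parameters fixed by the invention.

import math

def delay_duration(segment_length_m, propagation_speed_m_s,
                   attenuation_np_per_m, relative_threshold=1e-3):
    # Accumulate traversals of one segment until the attenuated amplitude of
    # the signal becomes negligible; the result is the time to wait before
    # activating the next device.
    trip_time = segment_length_m / propagation_speed_m_s
    trip_attenuation = math.exp(-attenuation_np_per_m * segment_length_m)
    amplitude, delay = 1.0, 0.0
    while amplitude >= relative_threshold:
        amplitude *= trip_attenuation  # one more traversal of the segment
        delay += trip_time
    return delay

# Example: a 100 m segment, V = 2e8 m/s, 0.05 Np/m of attenuation.
print(delay_duration(100.0, 2e8, 0.05))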
When the delay duration has elapsed, the control unit CTRL activates monitoring of the second segment S2. To do this, it transmits, to the second device M2, an activation command, with a view to reiterating the reflectometry analysis for the next cable segment S2. More generally, the control unit CTRL takes potential interference between the signals injected by the various devices into account so as to prevent it. The various reflectometry devices do not need to be precisely synchronized with each other. The test procedure implemented by the control unit CTRL may take various forms. It may consist in successively testing the state of each cable segment S1, S2, . . . , Sn-1 when the targeted objective is to monitor the entire cable. Alternatively, it may also consist in independently testing a cable segment, for example with a view to monitoring the progress of a previously detected degradation. In one variant embodiment of the invention, the control unit CTRL activates monitoring of two or more cable segments simultaneously, when the distance between the two simultaneously monitored segments is large enough that the signal will not be able to propagate this distance without being sufficiently attenuated. In other words, the attenuation coefficient of the signal is taken into account to determine a minimum distance between two devices such that simultaneous activation of these two devices will not generate interference. In yet another variant embodiment of the invention, mutually orthogonal signals are used, for example signals coded by means of a CDMA code (CDMA standing for code-division multiple access). In this case, each device uses a different signal that is orthogonal to all the others. Thus, all segments may be monitored at the same time. In this variant embodiment, the control unit CTRL no longer manages the sequence of the measurements but the distribution of the orthogonal signals between the various devices of the system. The measurements taken are transmitted to a post-processing unit PTR, which performs a reflectometry test with a view to detecting a fault in a cable segment. A reflectometry test consists in identifying, in the obtained measurement of the signal, an amplitude peak characteristic of an impedance discontinuity from which the incident signal has been reflected. FIG. 2 schematically shows the operating principle of a reflectometry-based diagnostic method applied to a cable segment S1 containing a fault DNF, a soft fault for example. The example described below corresponds to a time-domain reflectometry method. A reference signal S, also called the incident signal, is injected into the cable by the device M1.
This signal propagates through the line and encounters, during its propagation, a first impedance discontinuity at the start of the fault DNF. The signal is reflected from this discontinuity with a reflection coefficient Γ1. If the characteristic impedance Zc2 in the region of the soft fault DNF is less than the characteristic impedance Zc1 before the occurrence of the fault, then the reflection coefficient Γ1 is negative and results in a peak of negative amplitude in the reflected signal R. In the opposite case, the reflection coefficient Γ1 is positive and results in a peak of positive amplitude in the reflected signal R. The transmitted portion T of the incident signal S continues to propagate through the line and then encounters a second impedance discontinuity, creating a second reflection of the incident signal with a reflection coefficient Γ2 of a sign opposite to that of the first reflection coefficient Γ1. If Γ1 < 0, then Γ2 > 0; if Γ1 > 0, then Γ2 < 0. The reflected signal R is measured by the device M1. By observing the reflected signal R, the signature of the soft fault DNF is characterized by two successive peaks of opposite signs, as shown in FIG. 3. FIG. 3 shows a time-domain reflectogram that corresponds either directly to the measurement of the reflected signal R or to the intercorrelation between the reflected signal R and the signal S injected into the cable. In the case where the injected reference signal is a time-dependent pulse, which corresponds to the case of a time-domain reflectometry method, the reflectogram may correspond directly to the measurement of the reflected signal R. In the case where the injected reference signal is a more complex signal, for example for MCTDR (multi-carrier time-domain reflectometry) or OMTDR (orthogonal multi-tone time-domain reflectometry) methods, the reflectogram is obtained by intercorrelating the reflected signal R and the injected signal S. FIG. 3 shows two reflectograms 201, 202 corresponding, as regards the signal injected into the cable, to two different pulse durations. Curve 201 corresponds to a pulse duration 2·ΔT much longer than the time taken by the signal to pass through the soft fault DNF. With the length of the fault being denoted Ld, this time is equal to Ld/V, where V is the propagation speed of the signal through the cable. Curve 202 corresponds to a pulse duration 2·ΔT much shorter than the time taken by the signal to pass through the soft fault DNF. In both cases, the signature 203 of the soft fault, in the reflectogram, is the succession of a first peak and a second peak whose signs are opposite. The distance between the two peaks characterizes the length of the soft fault, and their amplitude characterizes the severity of the soft fault. Specifically, the larger the variation in the characteristic impedance, the higher the amplitude of the signature of the soft fault in the reflectogram. As is known in the field of reflectometry-based diagnostic methods, the position dDNF of the soft fault in the cable, or in other words its distance from the point P of injection of the signal, may be obtained by directly measuring, in the time-domain reflectogram of FIG. 3, the duration tDNF between the first amplitude peak recorded in the reflectogram (at the x-coordinate 0.5 in the example of FIG. 3) and the amplitude peak 203 corresponding to the signature of the soft fault. Various known methods may be contemplated for determining the position dDNF.
A first method consists in applying the relationship linking distance and time: dDNF = V·tDNF, where V is the speed of propagation of the signal through the cable. In the case where a reflection of the signal occurs at the device located at the end of a cable segment, another possible method consists in applying a proportionality relationship such as dDNF/tDNF = l/t0, where l is the length of the cable segment and t0 is the duration, measured in the reflectogram, between the amplitude peak corresponding to the impedance discontinuity at the point of injection and the amplitude peak corresponding to the reflection of the signal from the end of the segment. In the case where the cable segment S1 is healthy, i.e. fault-free, the signal R is either reflected from the impedance discontinuity caused by the coupling between the second device M2 and cable C, or it is not reflected. These two situations may be identified by analyzing the presence or absence of amplitude peaks in the measured reflectogram and the time coordinates of any peaks. For example, if a reflection occurs at a distance larger than the length of a cable segment, this means that there is no fault in the analyzed segment. One advantage of the invention is the ability to provide information on the location of the degradation or fault, located in a particular cable segment. Analysis of the reflectogram moreover allows the fault to be located inside the identified cable segment. FIG. 4 diagrammatically shows one example of embodiment of a reflectometry device M1, M2 used to monitor the state of a conductor of a three-phase cable comprising three conductors. The ends of the cable under test are short-circuited (CC1, CC2) with an adjacent conductor, allowing current to flow between these two conductors. This loop may be formed in various ways. A first embodiment example consists in connecting the core of a coaxial cable to its shielding via a resistor at each of the ends of the cable. A second embodiment example consists in connecting two independent conductors via short circuits CC1, CC2, as illustrated in FIG. 4. In the example illustrated in FIG. 4, the coupling between each device M1, M2 and the cable is achieved via a contactless inductive coupler CPL. The inductive couplers are, for example, made up of ferrite toroids T1, T2, T3, T4 that are mounted in parallel in the vicinity of a point on the cable. In the example of FIG. 4, the coupler consists of four toroids. Each toroid comprises a plurality of windings of the connecting wire that connects it to the device. The number of toroids and the number of windings are parameters that allow the gain of the coupling and its constancy as a function of frequency to be controlled, so as to better control the amplification of the signal injected or measured. The example of FIG. 4 may be generalized to the monitoring of all three conductors of the cable, one coupler CPL being positioned on each conductor, or to a single cable with a single conductor. Without departing from the scope of the invention, the cable and each device may be coupled by other contactless coupling means or via a physical connection to the cable. For example, galvanic coupling may be achieved by stripping the cable and placing it in contact with a metal clamp connected to the device M1, M2. Each reflectometry device may be implemented by means of an on-board processor. The processor may be a generic processor, a specific processor, an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
The device according to the invention may use one or more dedicated electronic circuits or a general-purpose circuit. The technique of the invention may be carried out on a reprogrammable computing machine (a processor or a microcontroller for example) executing a program comprising a sequence of instructions, or on a dedicated computing machine (for example a set of logic gates such as an FPGA or an ASIC, or any other hardware module). The control unit CTRL and the post-processing unit PTR may be implemented by means of a computer or any other equivalent computing device.
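Purely by way of illustration, the post-processing described with reference to FIGS. 2 and 3 can be sketched as follows: the sketch looks for the soft-fault signature (two consecutive peaks of opposite signs) in a reflectogram and converts its time coordinate into a distance with dDNF = V·tDNF. The peak-detection settings, and the simplification that tDNF is counted from the start of the reflectogram, are assumptions of the sketch rather than details fixed by the invention.

import numpy as np
from scipy.signal import find_peaks

def locate_soft_fault(reflectogram, sample_period_s, propagation_speed_m_s,
                      prominence=0.05):
    # Find positive and negative amplitude peaks, then look for two
    # consecutive peaks of opposite signs (the soft-fault signature) and
    # convert the time of the first one into a distance.
    r = np.asarray(reflectogram, float)
    pos, _ = find_peaks(r, prominence=prominence)
    neg, _ = find_peaks(-r, prominence=prominence)
    peaks = sorted([(i, +1) for i in pos] + [(i, -1) for i in neg])
    for (i1, s1), (i2, s2) in zip(peaks, peaks[1:]):
        if s1 != s2:
            t_dnf = i1 * sample_period_s
            return propagation_speed_m_s * t_dnf  # dDNF = V * tDNF
    return None  # no signature: the segment is considered healthy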
17,762
11860216
DETAILED DESCRIPTION OF THE EMBODIMENTS The technical solutions in the embodiments of the present disclosure are clearly and completely described below with reference to the accompanying drawings used in the embodiments of the present disclosure. Apparently, the described embodiments are a part, rather than all, of the embodiments of the present disclosure. Any other embodiments obtained by those of ordinary skill in the art from the embodiments of the present disclosure without any creative effort shall fall within the protection scope of the present disclosure. It is to be understood that, when used in this specification and the appended claims, the terms "comprise" and "include" (and variants thereof) indicate the existence of described features, entireties, steps, operations, elements and/or components, and do not exclude the existence or addition of one or more other features, entireties, steps, operations, elements, components, and/or combinations thereof. According to the conventional technology, when a fault arc occurs, the current signal in the circuit may be significantly distorted, while the voltage signal remains similar to a normal voltage signal. Therefore, in the present disclosure, a current signal is sampled, analog-to-digital (A/D) conversion is performed on the current signal, and the current signal is then analyzed to obtain features of various arc signals. In addition, to address the problems of the conventional arc detection method described in the background technology, in the solutions according to the present disclosure, an analog-to-digital conversion is performed on a sampled current signal, and filtering is then performed by using three filters with different pass-bands. For each of the half-wave signals outputted after filtering, time-domain eigenvectors and frequency-domain eigenvectors of the half-wave are extracted. Eigenvectors corresponding to an output of a same filter are spliced to obtain a two-dimensional matrix. The feature matrices corresponding to the three filters are stacked to obtain a three-dimensional feature matrix. A two-class processing is performed on the three-dimensional feature matrix by using a two-dimensional convolutional neural network, and it is determined whether an arc occurs in the half-wave based on an outputted probability value. The number of half-waves of fault arcs occurring in an observation time period ΔT is counted and compared with a preset threshold. A tripping operation is performed in a case that the number of the half-waves of the fault arcs occurring in the observation time period ΔT exceeds the preset threshold, and no operation is performed in a case that the number does not exceed the preset threshold. Hereinafter, the method for detecting a fault arc according to the present disclosure is described with reference to FIGS. 1 to 8. FIG. 1 shows a flowchart of a method for detecting an arc based on a convolutional neural network according to the present disclosure. First, an analog-to-digital conversion is performed on a sampled current signal, and the signal after the analog-to-digital conversion is then filtered by three band-pass filters with different pass-bands. The pass-bands respectively range from 500 kHz to 50 MHz, from 50 MHz to 100 MHz, and from 100 MHz to 200 MHz.
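As an illustration of this filtering stage, the following sketch applies three Butterworth band-pass filters; the 500 MHz sampling rate, the filter order and the Butterworth choice are assumptions of the sketch, since the disclosure does not fix them at this point.

import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500e6  # assumed sampling rate in Hz
BANDS = [(500e3, 50e6), (50e6, 100e6), (100e6, 200e6)]

def filter_three_bands(current_samples, order=4):
    # Return the three band-limited versions of the digitized current signal.
    outputs = []
    for low, high in BANDS:
        sos = butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
        outputs.append(sosfiltfilt(sos, np.asarray(current_samples, float)))
    return outputs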
Each filtered half-wave, with a time length of 10 ms, is equally divided into 300 segments, and an arc eigenvalue of the high-frequency signal in each of the segments is extracted respectively in the time domain and in the frequency domain. Eigenvalues of a same type over the 300 segments are arranged chronologically to form a 300-dimensional eigenvector. Eigenvectors of different types then form a feature matrix. The feature matrix is processed by using a multi-channel two-dimensional convolutional neural network. Before using the neural network for online determination, it is required to train the neural network to obtain and save an optimal model. The number of fault half-waves in an observation time period ΔT is counted and then compared with a preset threshold to determine whether to perform a tripping operation. The above process is described in detail below. In detecting an arc signal, since the arc signal is non-stationary, each of the half-wave signals is processed in time segments. Based on this idea, in a preferred embodiment, a 10 ms half-wave is divided into 300 segments, a time-domain feature and a frequency-domain feature of each of the segments are extracted, and the eigenvalues extracted from the 300 segments are arranged chronologically to form a 300-dimensional eigenvector. Waveform preprocessing is required before calculating the time dispersion, the amplitude dispersion, and the number of waveforms among the time-domain features. The waveform preprocessing is performed by: for each of the original waveforms, eliminating the non-local-extreme points of the waveform, and connecting the remaining sampling points in sequence to obtain a new waveform. The new waveform includes only the local maximum points and local minimum points of the original waveform. FIG. 7 shows waveforms after eliminating non-extreme points as described above. In performing the time-domain feature analysis, the calculation of the time dispersion is shown in FIG. 7. Based on the preprocessed waveform, the calculation is performed by dividing the sum of the absolute values of the time differences between adjacent waveforms by the sum of the time periods. The calculation is performed by using the following equation: Time dispersion = (|T2 − T1| + |T3 − T2| + |T4 − T3|) / (|T1| + |T2| + … + |T4|). As shown in FIG. 7, in the above equation, Ti represents the time interval between two adjacent minimum values, i.e. the time length of a waveform unit. In performing the time-domain feature analysis as shown in FIG. 7, the amplitude dispersion is calculated by dividing the sum of the absolute values of the amplitude differences of adjacent waveforms by the sum of the amplitudes in the segment.
The calculation is performed by using the following equation: Amplitude dispersion = (|VFH − VDF| + |VHJ − VFH| + |VJP − VHJ|) / (|VDE| + |VFG| + … + |VMP|), where each of the differences between VFH and VDF, between VHJ and VFH, and between VJP and VHJ in the numerator represents an amplitude difference between two adjacent minimum points, and each of VDE, VFG and VMP in the denominator represents an amplitude difference between a minimum point and a maximum point adjacent to the minimum point. In an embodiment, assuming that y represents the sequence of the new waveform obtained by preprocessing the waveform, the number N of the waveforms is calculated by using the following equation: N = ⌊(length(y) − 1)/2⌋, where length(y) represents the sequence length of the preprocessed waveform, and ⌊·⌋ represents a rounding-down operation. The frequency-domain feature is extracted from the filtered signal. The same frequency-domain processing is performed on each of the signals outputted from the three filters. The FFT eigenvalues are extracted by performing the following steps 1 to 6. In step 1, the data of each of the 10 ms half-waves filtered by the different sub-band filters is divided into 300 segments. In step 2, a 1024-point FFT transform is performed on the data in each of the segments. Assuming that L represents the length of the data in each of the segments, the 1024-point FFT transform is performed on the data in each of the segments M = ⌊L/1024⌋ times. In step 3, 37 frequency channels are selected from the FFT operation results corresponding to two pass-bands, and the FFT transform values of a same frequency point across the M transforms in a segment form an M-dimensional eigenvector. In step 4, median filtering is performed on the M-dimensional eigenvectors respectively corresponding to the 37 frequency points to obtain 37 median-filtered eigenvectors. In step 5, the median-filtered eigenvectors corresponding to the 37 frequency points are summed according to the frequency points to obtain the eigenvalues corresponding to the 37 frequency points in each of the segments. In step 6, the above operations are performed on each of the 300 segments of each of the half-waves, and the eigenvalues of the 300 segments in a same FFT channel form an eigenvector. The eigenvectors corresponding to the 37 frequency points form a 37*300 feature matrix. Described above are the eigenvalues designed and adopted in a preferred embodiment of the present disclosure, and the eigenvalues to be processed by the method for detecting an arc based on the multi-channel two-dimensional convolutional neural network are not limited to the eigenvalues mentioned above. With the time-domain feature analysis and the frequency-domain feature analysis, multiple eigenvectors may be obtained.
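Before turning to normalization, the time-domain features defined above can be sketched in code; the sketch assumes the segment has already been reduced to its alternating local extrema, starting on a minimum, which is an assumption made for the example.

import numpy as np

def time_domain_features(extrema_times, extrema_values):
    # The preprocessed waveform is assumed to alternate minimum/maximum,
    # starting on a minimum, so even indices are minima.
    t = np.asarray(extrema_times, float)
    v = np.asarray(extrema_values, float)
    minima_t, minima_v = t[::2], v[::2]
    maxima_v = v[1::2]
    periods = np.diff(minima_t)  # Ti: time length of one waveform unit
    time_dispersion = np.abs(np.diff(periods)).sum() / np.abs(periods).sum()
    min_diffs = np.diff(minima_v)  # VDF, VFH, ...: between adjacent minima
    swings = np.abs(maxima_v - minima_v[:len(maxima_v)])  # VDE, VFG, ...
    amplitude_dispersion = np.abs(np.diff(min_diffs)).sum() / swings.sum()
    n_waveforms = (len(v) - 1) // 2  # N = floor((length(y) - 1) / 2)
    return time_dispersion, amplitude_dispersion, n_waveforms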
Before processing the eigenvectors by using the neural network, it is required to perform normalization on the eigenvectors to eliminate the influence of the dimensions of the different eigenvalues. Since each of the half-waves is divided into 300 segments and the obtained eigenvectors are 300-dimensional, normalization is performed on each of the eigenvectors by using the following equation: x'[n] = (x[n] − min(X)) / (max(X) − min(X)), where x[n] represents the n-th element in the eigenvector, x'[n] represents the element after normalization, X represents the eigenvector, max(X) represents the element with the maximum value in the eigenvector X, and min(X) represents the element with the minimum value in the eigenvector X. In an embodiment, based on the image processing method using a convolutional neural network, the eigenvectors after normalization are spliced to obtain a feature matrix in the detection method using the neural network according to the present disclosure. The two-dimensional feature matrices corresponding to different filters are similar to different channels in an image. For each of the signals outputted from the filters, 3 time-domain eigenvectors and 37 frequency-domain eigenvectors are extracted, each of the half-waves is divided into 300 segments, and the eigenvectors are spliced to obtain a 40*300 feature matrix. The two-dimensional feature matrices corresponding to the three filters may be stacked. As shown in FIG. 3, three 40*300 matrices are stacked to obtain a 40*300*3 three-dimensional matrix, where 3 indicates the number of channels of the feature matrix. In the embodiment, the topology structure of the neural network is shown in FIG. 2. With reference to the process of the neural network shown in FIG. 4, the topology structure of the neural network is briefly described in the following steps 1 to 8. In step 1, a 40*300*3 three-dimensional feature matrix corresponding to each of the half-waves is inputted through an input layer, and is then processed by two convolution layers. In step 2, a first convolution layer has three 5*5*3 convolution kernels, where the number 3 in 5*5*3 indicates that the depth of each convolution kernel is the same as the number of channels of the inputted feature matrix. Each of the convolution kernels outputs a 36*296 result. The three convolution kernels of the first convolution layer correspond to three channels. Thus, the first convolution layer outputs a 36*296*3 result. In step 3, a first pooling layer having a 6*8 pooling window performs dimension reduction on the output of the first convolution layer to output a 6*37*3 result. In step 4, the output of the first pooling layer is inputted to a second convolution layer, which has five 3*3*3 convolution kernels, and a 4*35*5 result is outputted. In step 5, a pooling layer having a 2*2 pooling window performs dimension reduction on the result outputted from the second convolution layer, and outputs a 2*17*5 result. In step 6, a Flatten layer stretches the three-dimensional matrix to obtain a one-dimensional vector including 170 elements. In step 7, the one-dimensional vector is inputted to a fully connected layer having 64 neurons, then to a fully connected layer having 32 neurons, and then to an output layer having one neuron. In step 8, after the neuron in the output layer performs processing, the output layer outputs a probability value for performing two-class processing to determine whether an arc occurs or no arc occurs. The Dropout layer in FIG. 2 is mainly used during training to reduce overfitting.
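A sketch of the topology of steps 1 to 8 follows, written with tf.keras as one possible realization (the disclosure does not mandate a framework); the optimizer, loss and exact Dropout placement are assumptions of the sketch. The commented shapes reproduce the dimensions given above.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(40, 300, 3)),            # step 1: 40*300*3 input
    layers.Conv2D(3, (5, 5), activation="relu"),   # step 2: -> 36*296*3
    layers.MaxPooling2D(pool_size=(6, 8)),         # step 3: -> 6*37*3
    layers.Conv2D(5, (3, 3), activation="relu"),   # step 4: -> 4*35*5
    layers.MaxPooling2D(pool_size=(2, 2)),         # step 5: -> 2*17*5
    layers.Flatten(),                              # step 6: -> 170 elements
    layers.Dense(64, activation="relu"),           # step 7
    layers.Dropout(0.5),                           # training-time regularization
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),         # step 8: arc probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")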
In the embodiment, a multi-channel two-dimensional convolution operation is performed. In this operation, a two-dimensional convolution is performed for each of the channels, the convolution results of all channels are summed, and a bias value is then added. The calculation is performed by using the following equation: yn = Σ(k=1 to K) Σ(i=1 to M) Σ(j=1 to N) (xi,j,k · ai,j,k) + bn, where K represents the number of channels, M represents the number of rows of the convolution kernel in each of the channels, N represents the number of columns of the convolution kernel in each of the channels, yn represents a convolution output result, bn represents a direct-current bias in the linear operation, ai,j,k represents a weighting coefficient in the linear operation, and xi,j,k represents an originally inputted feature element or an output result of a previous convolution layer. In the embodiment, the convolution operation is performed with a stride of 1. As shown in FIG. 5, in performing a next convolution operation, a sliding is performed on the matrix inputted to the convolution layer according to the stride. First, the row is fixed and a column sliding is performed until the end of the columns is reached, and then a row sliding is performed along the row direction according to the stride. It is assumed that the original matrix is an A*B*K matrix, where K represents the number of channels in the data matrix. If a convolution operation with an M*N*K convolution kernel is performed, then an (A−M+1)*(B−N+1) result is outputted. The number of channels of the convolution operation result is determined by the number of the convolution kernels. In the embodiment, dimensionality reduction is performed on the convolution result by using a pooling layer; a MaxPooling2D pooling layer is used. As shown in FIG. 6, assuming that a convolution result has a 6*4 channel matrix and a 3*2 pooling window is used in the pooling process, the pooling process is performed by using the following equations: a11 = max(A, B, E, F, I, J); a12 = max(C, D, G, H, K, L); a21 = max(M, N, Q, R, U, V); a22 = max(O, P, S, T, W, X). Thus, the pooling process outputs a 2*2 result. During the pooling process, adjacent pooling operation windows do not overlap with each other. In the embodiment, the outputs of the neurons in the fully connected layers and the output layer are obtained by using the following calculation: yn = Σ(i=1 to N) (ai · xi) + bn, where yn represents an output of the fully connected layer or the output layer after performing a linear operation, ai represents a weighting coefficient for the operations in the fully connected layer or the output layer, xi represents an input to the fully connected layer or the output layer, and bn represents a direct-current bias in the linear operation. N is 170 for the calculation of the neurons in the first fully connected layer, N is 64 for the calculation of the neurons in the second fully connected layer, and N is 32 for the calculation of the neuron in the output layer. In the embodiment, the convolution layers and the fully connected layers adopt a ReLU activation function, which is expressed as relu(x) = max(0, x), where x represents a weighted sum result after the convolution operations or after processing by the fully connected layer. In the embodiment, the output layer adopts a sigmoid function, which is expressed as sigmoid(x) = 1/(1 + e^(−x)), where x represents the weighted sum result of the last fully connected layer. The result outputted by the output layer ranges from 0 to 1, and represents the probability of the classification result being 0 or 1.
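For illustration, the multi-channel convolution sum above can be transcribed directly into numpy (stride 1, no padding, one kernel); a real implementation would rely on an optimized library routine.

import numpy as np

def conv2d_multichannel(x, kernel, bias=0.0):
    # x: A*B*K input, kernel: M*N*K weights -> (A-M+1)*(B-N+1) output.
    A, B, K = x.shape
    M, N, _ = kernel.shape
    out = np.empty((A - M + 1, B - N + 1))
    for r in range(A - M + 1):
        for c in range(B - N + 1):
            # Sum over the M*N window and over all K channels, then add bias.
            out[r, c] = np.sum(x[r:r + M, c:c + N, :] * kernel) + bias
    return out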
In the embodiment, the result outputted by the activation function of the output layer is classified based on a threshold of 0.5, which is expressed as: y = 0 in a case that sigmoid(x) > 0.5, and y = 1 in a case that sigmoid(x) < 0.5, where x represents the output of the neuron in the output layer, and y represents the determination result for a half-wave: the half-wave is a normal half-wave in a case that y = 0, and the half-wave is determined as a fault arc half-wave in a case that y = 1. In the detection method, before the neural network model is used for determination, it is required to train the neural network model offline based on training data to obtain and save the model with the best performance. Then, online determination is performed on the obtained feature matrix by using the trained model. In collecting data in a laboratory, there may be cases in which data is labeled as arcing data while no arc occurs on the arc generator or the carbonized cable; thus it is required to clean the data and eliminate such records before the data is provided to the neural network for training. According to the present disclosure, the voltage at the position at which a series fault arc occurs and the current at the position at which a parallel fault arc occurs are measured to determine whether the collected experimental data indicates that an arcing occurs. FIG. 8 and FIG. 10 show the circuits. In the series arc experiment, there are two cases in which there is no arcing. In one case, the iron rod in the arc generator is completely separated from the carbon rod in the arc generator, or the two wires in the carbonized cable are separated from each other. In this case, the voltage across the arc generator or the carbonized cable is a standard line voltage, as shown by line c in FIG. 9. In the other case, the iron rod in the arc generator is in complete contact with the carbon rod in the arc generator, or the two wires in the carbonized cable are connected to each other. In this case, the voltage across the arc generator or the carbonized cable fluctuates within a small range around zero, as shown by line b in FIG. 9. When an arc occurs, the voltage is lower than the standard line voltage and is seriously distorted, as shown by line a in FIG. 9. Therefore, the experimental data mislabeled as arcing data may be eliminated based on the waveform of the measured voltage. In the parallel arc experiment, there are two cases in which there is no arcing. In one case, the two wires in the cable are separated from each other, and the current at the position at which the arc would occur is close to zero, as shown by line c in FIG. 11. In the other case, the two wires in the cable are short-circuited, and the current at the position at which the arc would occur is very large, as shown by line b in FIG. 11. When an arc occurs, the current is less than the line conduction current and there is a flat-shoulder feature, which is a symbolic feature of an arc, as shown by line a in FIG. 11. Due to the short acquisition time period for the parallel arc, an arc may be determined by manually checking whether the current has the flat-shoulder feature. In an embodiment, the breaking time of a circuit breaker varies with the current. Therefore, in addition to performing determination on a single half-wave, it is required to perform determination on all half-waves in the observation time period ΔT by using the neural network, and it is determined whether to perform a tripping operation based on the determination results of the half-waves in the observation time period. In an embodiment, the determination is performed by performing the following steps 1 to 4.
In step 1, an observation time period ΔT and a fault half-wave number threshold in the observation time period are determined by querying a table based on a calculated measurement current. In step 2, the half-waves in ΔT are detected and determined by using the neural-network-based detection method, the determination results are outputted, and a determination result vector is obtained from the determination results. In step 3, the elements in the determination result vector are summed to obtain the number of fault half-waves in the observation time period. The calculation is performed by using the following equation: N = Σ(i=1 to ⌈ΔT/10⌉) yi, where yi represents the determination result of the i-th half-wave in the observation time period, a determination result equal to 0 indicates that the half-wave is a normal half-wave, a determination result equal to 1 indicates that an arcing occurs, ⌈ΔT/10⌉ represents the number of half-waves in the observation time period ΔT, and ⌈·⌉ represents a rounding-up operation. In step 4, the number of fault half-waves in the observation time period ΔT obtained above is compared with the threshold to determine whether to perform a tripping operation. Compared with the conventional method in which a single eigenvalue is obtained and then compared with a threshold to determine whether a half-wave is a fault arc half-wave, the method based on a convolutional neural network according to the present disclosure achieves higher accuracy and higher reliability in identifying a fault arc half-wave, and achieves adaptability by performing training for different load conditions. Described above are only specific embodiments of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Various modifications or substitutions equivalent to the embodiments can easily be made by those skilled in the art within the technical scope disclosed by the present disclosure. These modifications or substitutions should fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope defined in the claims.
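For illustration, the determination of steps 1 to 4 above can be sketched as follows; the look-up table mapping the measured current to an observation window and a threshold is a hypothetical placeholder, since the disclosure only states that such a table is queried.

import math

TRIP_TABLE = [  # (minimum current in A, window ΔT in ms, fault half-wave threshold)
    (5.0, 500, 8),
    (20.0, 300, 5),
    (75.0, 150, 3),
]

def should_trip(measured_current_a, half_wave_results):
    # half_wave_results: sequence of 0/1 outputs of the network, one per
    # 10 ms half-wave, most recent last.
    window_ms, threshold = TRIP_TABLE[0][1], TRIP_TABLE[0][2]
    for min_current, win, thr in TRIP_TABLE:
        if measured_current_a >= min_current:
            window_ms, threshold = win, thr
    n_half_waves = math.ceil(window_ms / 10)            # ceil(ΔT / 10)
    n_faults = sum(half_wave_results[-n_half_waves:])   # N = sum of y_i
    return n_faults > threshold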
22,222
11860217
DETAILED DESCRIPTION OF THE PRESENT INVENTION In order to facilitate the understanding of the present application, the present application will be described more fully below with reference to the relevant drawings. Preferred embodiments of the present application are shown in the drawings. However, the present application may be implemented in many different forms and is not limited to the embodiments described herein. On the contrary, these embodiments are provided to make the disclosure of the present application more thorough and comprehensive. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person of ordinary skill in the art to which the present application belongs. Here, terms used in the description of the present application are merely intended to describe specific embodiments, rather than limiting the present application. As used herein, the term "and/or" includes any or all of one or more associated items listed here or combinations thereof. In the description of the present application, it should be understood that orientations or location relationships indicated by terms such as "upper", "lower", "vertical", "horizontal", "inner", "outer" are the directions and the location relationships illustrated on the basis of the drawings, and are used just for convenience of describing the present application and simplifying the description, rather than indicating or implying that the devices or elements must have a specific orientation and be constructed and operated in the specific orientation, and therefore shall not be considered as any limitations to the present application. As shown in FIG. 1, the present application provides a test circuit comprising: M stages of test units 10, a test unit 10 in each stage having a first terminal, a second terminal, a third terminal, and a fourth terminal, the first terminals of the test units 10 in each stage being all connected to a power wire (i.e., the line to which the upper terminals of the test units 10 in each stage are connected), the second terminals of the test units 10 in each stage being all connected to a ground wire (not shown), the third terminals of the test units 10 in the first stage being connected to the power wire, and the third terminals of the test units 10 in the i-th stage being connected to the fourth terminals of the test units 10 in the (i−1)-th stage; wherein M is a positive integer greater than or equal to 2 and i is a positive integer greater than or equal to 2. In an example, a test unit 10 in each stage comprises an electro-migration test element 101, a switch 102, and a control circuit 103, wherein a first terminal of the electro-migration test element 101 is the first terminal of the test unit 10 and is connected to the power wire, a second terminal of the electro-migration test element 101 is connected to a first terminal of the switch 102 and a first terminal of the control circuit 103, a control terminal of the switch 102 is the third terminal of the test unit 10, a second terminal of the switch 102 is the second terminal of the test unit 10, and a second terminal of the control circuit 103 is the fourth terminal of the test unit 10. Specifically, the values of M and i may be set according to actual needs and will not be limited here.
In an example, the electro-migration test element101may be a metal wire; the length of the electro-migration test element101may be set according to actual needs, for example, the length of the electro-migration test element101may be 700 μm (micrometers) to 2000 μm; specifically, the length of the electro-migration test element101may be 700 μm, 1000 μm, 1500 μm, or 2000 μm. It should be noted that, in a specific embodiment, the length of the electro-migration test element101is not limited to the above-mentioned numerical value. In an example, the electro-migration test element101has a first equivalent resistance before being burned, the electro-migration test element101has a second equivalent resistance after being burned, and the first equivalent resistance is less than the second equivalent resistance. In an example, the ratio of the second equivalent resistance to the first equivalent resistance may be set according to actual needs. For example, the ratio of the second equivalent resistance to the first equivalent resistance may be greater than or equal to 100. Specifically, the ratio of the second equivalent resistance to the first equivalent resistance may be 100, 150, 200, 250, 300, etc. It should be noted that, in a specific embodiment, the ratio of the second equivalent resistance to the first equivalent resistance is not limited to the above-mentioned numerical value. In an optional example, M is greater than or equal to 2, and lengths of the electro-migration test elements101in the test units10in all stages are the same, and widths of the electro-migration test elements101in the test units10in all stages are the same. That is, the electro-migration test elements101in the test units10in all stages are completely the same. In this case, multiple same electro-migration test elements101may be tested to analyze the consistency of the electro-migration test elements101and improve the accuracy of the test results. In another optional example, M is greater than or equal to 2, and lengths of the electro-migration test elements101in the test units10in all stages are different, and widths of the electro-migration test elements101in the test units10in all stages are the same. That is, the electro-migration test elements101in the test units10in all stages are different. In this case, multiple different electro-migration test elements101with different lengths may be tested in sequence, which significantly shortens the time required for the test of the multiple electro-migration test elements101and greatly improves the test efficiency. In still another example, M is greater than or equal to 2, and lengths of the electro-migration test elements101in the test units10in all stages are different, and the widths of the electro-migration test elements101in the test units10in all stages are different. That is, the electro-migration test elements101in the test units10in all stages are different. In this case, multiple different electro-migration test elements101with different lengths and different widths may be tested in sequence, which significantly shortens the time required for the test of the multiple electro-migration test elements101and greatly improves the test efficiency. In yet another example, M is greater than or equal to 2, and lengths of the electro-migration test elements101in test units10in all stages are not exactly the same, and widths of the electro-migration test elements101in test units10in all stages are not exactly the same. 
That is, some of the electro-migration test elements101in the test units10in M stages are the same and some are different. In this case, multiple different electro-migration test elements101may be tested in sequence, which significantly shortens the time required for the test of multiple electro-migration test elements101and greatly improves the test efficiency. In addition, multiple identical electro-migration test elements101may be provided and tested to improve the accuracy of the test structure. In an example, the electro-migration test element101may be a metal wire, a polycrystalline silicon wire, etc., and the metal wire may be one or more of tungsten, aluminum, and copper. The first equivalent resistance value of the electro-migration test element101before being burned out is much less than the on-resistance value of the switch102, for example, the first equivalent resistance value is less than one-tenth of the on-resistance value of the switch102. The second equivalent resistance value of the electro-migration test element101after being burned out is much greater than the on-resistance value of the switch102, for example, the second equivalent resistance value is greater than ten times the on-resistance value of the switch102. In an example, the switch102comprises an NMOS transistor, and the gate, drain, and source of the NMOS transistor correspond to the control terminal, the first terminal, and the second terminal of the switch, respectively. That is, the drain of the NMOS transistor is the first terminal of the switch102and is electrically connected to the second terminal of the electro-migration test element101, and the source of the NMOS transistor is the second terminal of the switch102, which is grounded. In an example, the control circuit103comprises an inverter, and an input terminal and an output terminal of the inverter correspond to the first terminal and the second terminal of the control circuit103, respectively. That is, the input terminal of the inverter is the first terminal of the control circuit103and is electrically connected to the second terminal of the electro-migration test element101, and the output terminal of the inverter is the fourth terminal of the test unit10and is electrically connected to the gate of the NMOS transistor in a test unit10in the next stage. Referring toFIGS.2to4, the test principle of the test circuit of the present application will be described below. First, at the beginning of the test, the gates of the NMOS transistors in the test units10in the first stage are connected to the test line, and the NMOS transistors are turned on at this time. That is, the test units10in the first stage are turned on to start testing the electro-migration test elements101in the test units10in the first stage. Since, at this time, the gates of the NMOS transistors in the test units10in the second stage to the Mthstage are all connected to the output terminals of the inverters in the test units10in the previous stage and the electro-migration test elements101in the second stage to the Mthstage are not burned out, the first terminals of the inverters are at a high level and the second terminals of the inverters are at a low level, and the NMOS transistors in the test units10in the second stage to the Mthstage are all turned off. That is, at this time, the testing of the electro-migration test elements101in the test units10in the second stage to the Mthstage is not started, as shown inFIG.2.
Then, when the electro-migration test elements101in the test units10in the first stage are burned out, the first terminals of the inverters in the test units10in the first stage become low level and the second terminals of the inverters become high level. At this time, the NMOS transistors in the test units10in the second stage are turned on to start testing the electro-migration test elements101in the test units10in the second stage, as shown inFIG.3. Then, when the electro-migration test elements101in the test units10in the second stage are burned out, the first terminals of the inverters in the test units10in the second stage become low level and the second terminals of the inverters become high level. At this time, the NMOS transistors in the test units10in the third stage are turned on to start testing the electro-migration test elements101in the test units10in the third stage, as shown inFIG.4. This process is repeated until the electro-migration test elements101in the test units10in the Mthstage (the last stage of the test circuit) have been tested. It may be known from the above that the test circuit of the present application can automatically perform electro-migration tests on multiple test units10in sequence, which greatly improves the test efficiency. As shown inFIG.5, the present application further provides a semiconductor test method, which specifically comprises: S11: connecting the power wire in the test circuit as shown inFIG.1to a power supply device outside the chip; S12: monitoring the change in voltage and current of the power supply device; and S13: obtaining the lifetime of all the electro-migration test elements according to the monitored voltage and current. The specific structure of the test circuit has been shown inFIGS.1to4and described by related text descriptions and will not be repeated here. In the step S11, after connecting the power wire in the test circuit shown inFIG.1to a power supply device outside the chip, a same test voltage is applied to each electro-migration test element101. First, at the beginning of the test, the gates of the NMOS transistors in the test units10in the first stage are connected to the test line, and the NMOS transistors are turned on at this time. That is, the test units10in the first stage are turned on to start testing the electro-migration test elements101in the test units10in the first stage. Since, at this time, the gates of the NMOS transistors in the test units10in the second stage to the Mthstage are all connected to the output terminals of the inverters in the test units10in the previous stage and the electro-migration test elements101in the second stage to the Mthstage are not burned out, the first terminals of the inverters are at a high level and the second terminals of the inverters are at a low level, and the NMOS transistors in the test units10in the second stage to the Mthstage are all turned off. That is, at this time, the testing of the electro-migration test elements101in the test units10in the second stage to the Mthstage is not started, as shown inFIG.2. Then, when the electro-migration test elements101in the test units10in the first stage are burned out, the first terminals of the inverters in the test units10in the first stage become low level and the second terminals of the inverters become high level. At this time, the NMOS transistors in the test units10in the second stage are turned on to start testing the electro-migration test elements101in the test units10in the second stage, as shown inFIG.3. 
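The stage-to-stage hand-off just described can be checked with a small numeric model. The following Python fragment is a minimal sketch that treats each test unit as a resistive divider between the electro-migration test element and the NMOS switch; the supply voltage, on-resistance, and threshold values are illustrative assumptions, not values from the source:

VDD = 5.0        # assumed supply voltage on the power wire (volts)
R_ON = 1000.0    # assumed NMOS on-resistance (ohms)

def node_voltage(r_element, switch_on):
    """Voltage at the node between the electro-migration test element
    (tied to the power wire) and the NMOS switch (tied to ground),
    modeled as a simple resistive divider."""
    if not switch_on:
        return VDD  # no current path: the element pulls the node to VDD
    return VDD * R_ON / (r_element + R_ON)

def next_stage_enabled(r_element, switch_on, v_threshold=VDD / 2):
    """The inverter on this node drives the next stage's gate: a low
    node voltage (element burned out) gives a high inverter output,
    which turns on the next stage's NMOS transistor."""
    return node_voltage(r_element, switch_on) < v_threshold

# Intact element: first resistance << R_ON, node stays high, next stage off.
assert not next_stage_enabled(100.0, switch_on=True)
# Burned-out element: second resistance >> R_ON, node goes low, next stage on.
assert next_stage_enabled(1e6, switch_on=True)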
Then, when the electro-migration test elements101in the test units10in the second stage are burned out, the first terminals of the inverters in the test units10in the second stage become low level and the second terminals of the inverters become high level. At this time, the NMOS transistors in the test units10in the third stage are turned on to start testing the electro-migration test elements101in the test units10in the third stage, as shown inFIG.4. This process is repeated until the electro-migration test elements101in the test units10in the Mthstage (the last stage of the test circuit) have been tested. It may be known from the above that the test circuit of the present application can automatically perform electro-migration tests on multiple test units10in sequence, which greatly improves the test efficiency. In the step S12, the voltage and current of the power supply device are monitored. The change in voltage and current corresponding to the breakdown of each electro-migration test element101is recorded. The corresponding change in current density on the electro-migration test element ranges from 1×10⁵ A/cm² to 1×10¹⁰ A/cm². In the step S13, the lifetime of all the electro-migration test elements101is obtained according to the monitored voltage and current. That is, the lifetime of the electro-migration test elements101in the test units10in stages may be recorded stage by stage. To record the lifetime of each electro-migration test element101, timing is started when a test voltage is applied to the electro-migration test element101. The time from then on to the burning out of the electro-migration test element101is the lifetime of the electro-migration test element101. Various technical features of the above embodiments can be arbitrarily combined. For simplicity, not all possible combinations of various technical features of the above embodiments are described. However, all those technical features shall be included in the protection scope of the present disclosure provided that they do not conflict. The embodiments described above merely represent certain implementations of the present application. Although those embodiments are described in more specific details, they are not to be construed as any limitation to the scope of the present application. It should be noted that, for a person of ordinary skill in the art, a number of variations and improvements may be made without departing from the concept of the present application, and those variations and improvements should be regarded as falling into the protection scope of the present application. Therefore, the protection scope of the present application should be subject to the appended claims.
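As a rough illustration of steps S12 and S13, once the burnout instants have been identified from the steps in the monitored current, the per-stage lifetimes follow directly, because each stage starts its test exactly when the previous stage's element burns out. The following Python fragment is a sketch under that assumption; the names and example times are illustrative, not from the source:

def stage_lifetimes(burnout_times):
    """Lifetime of each electro-migration test element: the time from
    the start of its own test (the previous element's burnout, or t=0
    for the first stage) to its own burnout, as recovered stage by
    stage from the monitored current steps (step S13)."""
    lifetimes = []
    t_start = 0.0
    for t_burn in burnout_times:  # burnout instants in stage order
        lifetimes.append(t_burn - t_start)
        t_start = t_burn          # the next stage starts testing now
    return lifetimes

# Burnouts observed at 120 s, 300 s, 390 s -> lifetimes 120 s, 180 s, 90 s.
print(stage_lifetimes([120.0, 300.0, 390.0]))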
16,465
11860218
DETAILED DESCRIPTION OF THE INVENTION It should be observed that in the following description, identical or analogous blocks, components or modules are indicated in the figures with the same numerical references, even where they are illustrated in different embodiments of the invention. Referring toFIG.1, it shows a block diagram of an electronic system10for testing the operation of an electronic circuit2according to the invention, in which said test is performed in an operating condition of the electronic circuit2which is defined (i.e., a known operating condition). For example, the test of the operation of the electronic circuit2under test is carried out when the electronic circuit2under test is activated, i.e., when it switches from a condition in which it is not powered to a condition in which it is powered (e.g., in the automotive field when the motor vehicle is switched on after a condition in which it is switched off). Note that more generally the invention is applicable to the testing of one or more electronic devices, but for the sake of simplicity for the purpose of the explanation of the invention, only one electronic circuit2under test will be considered. The electronic system10has the function of performing a self-diagnosis of the electronic circuit2under test in a defined (i.e., known) operating condition, such as when it is activated. The electronic circuit2under test is for example used in the automotive field and thus it is a component mounted on a motor vehicle, in particular it could be for example one of the following components: a driving circuit for a power converter, a voltage or current sensor, a control logic circuit. The electronic system10is such to generate a diagnosis signal S_d representative of a correct operation or of an incorrect operation of the electronic circuit2under test. The electronic system10comprises:an electronic driving device6;an electronic monitoring circuit4connected to the electronic driving device6;a switch mode power supply3connected to the electronic driving device6;the electronic circuit2under test connected to the switch mode power supply3;a processing unit5connected to the electronic monitoring circuit4. The electronic driving device6has the function of generating the control signal S_pwm_ctrl of the pulse-width modulation (PWM) type, which is used by the switch mode power supply3to control the periodic opening and closing of one or more power switches inside the switch mode power supply3. In particular, the control signal S_pwm_ctrl is a periodic pulsed signal (with a typically square wave trend, seeFIGS.3A and3B) and having a duty cycle which can vary over time (both increasing and decreasing), in which the duty cycle refers to the ratio between the temporal width of the portion of each pulse when it is active (i.e., when the pulse of the control signal S_pwm_ctrl has a high value) and the total duration of the same period of the control signal S_pwm_ctrl. For example, in the case of an application in the automotive industry, the driving device6is positioned inside the DC/DC battery charger. The control signal S_pwm_ctrl is also used by the electronic monitoring circuit4, as will be explained in more detail later. The switch mode power supply3has the function of providing the supply voltage and current of the electronic circuit2under test. 
The term “switch mode power supply” (or “switch mode converter”) means an electronic device which provides supply voltage and current to another electronic device or circuit using one or more power switches which periodically switch between an open position (where they are substantially equivalent to an open circuit) and a closed position (where they are substantially equivalent to a short circuit) as a function of suitable pulse-width modulation (PWM) control signals, wherein the output voltage generated is controlled by means of the variation of the duty cycle of said PWM control signals. The power switches are typically implemented with MOSFET-type or bipolar junction transistors. The switch mode power supply3is connected at the input to the electronic driving device6and at the output to the electronic circuit2under test. In particular, the switch mode power supply3comprises an input terminal adapted to receive a control signal S_pwm_ctrl of the pulse-width modulation type, having a periodic trend such as for example shown inFIGS.3A and3Bin which the period is indicated with T1and the pulse width is indicated with ΔT1, ΔT2, ΔT3. The switch mode power supply3further comprises an output terminal adapted to generate a voltage signal VDD for supplying the electronic circuit2under test. Therefore the switch mode power supply3comprises at least one or more switches which are configured to switch between an open and a closed position, as a function of the value of the pulse-width modulation control signal S_pwm_ctrl. The switch mode power supply3further comprises one or more electrical (e.g., capacitors) or magnetic (e.g., inductors, transformers) energy storage components which have the function of storing electrical or magnetic energy and then transferring it in output, generating the desired voltage and/or current value. Said switches of the switch mode power supply3are typically implemented with power transistors, for example of the MOSFET or bipolar junction type. The switch mode power supply3is for example a direct-direct voltage converter adapted to receive a direct voltage as input and to generate a direct voltage as output having a different value (for example lower) with respect to that input. For example, in the case of application in the automotive field, the switch mode power supply3is a direct-direct voltage converter which receives the battery voltage equal to 12 Volts as input and generates a supply voltage equal to 5 Volts or 3.3 Volts as output, which is used to supply the electronic circuits mounted in the motor vehicle, in particular the circuits positioned inside the DC/DC battery charger. An example of a direct-direct converter which can be used as a switch mode power supply3in the automotive field for hybrid vehicles is for example disclosed in the European patent application with publication number 1677410, in which (see FIG. 1) switches 1, 2, 3, 4 are present on the high voltage side, which are controlled respectively by the signals Vg(1), Vg(2), Vg(3), Vg(4) (see FIG. 4). The electronic monitoring circuit4has the function of detecting a variation in the power or current absorbed by the electronic circuit2under test, in order to detect if the electronic circuit2under test operates correctly or if it operates incorrectly (e.g., because there is a fault of one of its components). 
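As a rough numeric illustration (assuming an ideal step-down topology, which the text does not specify), the steady-state duty cycle needed for the quoted conversion would be approximately D=Vout/Vin=5 V/12 V≈0.42 for the 5 Volt output, or D=3.3 V/12 V≈0.28 for the 3.3 Volt output. In practice the driving device6adjusts this duty cycle continuously to regulate the output against load changes, which is precisely the behavior that the monitoring circuit4described below exploits.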
More in particular, the electronic monitoring circuit4is such to detect said variation in the power or current absorbed by the electronic circuit2under test by measuring the variation of the pulse width (i.e., the variation of the duty cycle) of the pulse-width modulation control signal S_pwm_ctrl received as input to the switch mode power supply3: said variation of the pulse width (i.e., the variation of the duty cycle) of the control signal S_pwm_ctrl has been indicated with ΔT+ and ΔT− inFIGS.3A-3B, respectively. Therefore the electronic monitoring circuit4is connected at the input to the electronic driving device6and at the output to the processing unit5. In particular, the electronic monitoring circuit4comprises an input terminal adapted to receive the pulse-width modulation control signal S_pwm_ctrl and comprises an output terminal adapted to generate an output signal S_ΔT representative of a variation of the pulse width (or of its duty cycle) of the pulse-width modulation control signal S_pwm_ctrl (seeFIGS.3A and3B, diagram below), in which said variation of the duty cycle is a function of the power or current absorbed by the electronic circuit2under test. For example,FIG.3Ashows that at the instant t3the pulse of the pulse-width modulation control signal S_pwm_ctrl has a width ΔT1(the time interval of the portion of the period T1in which it has a high value); in the period following the instant t10the pulse has a width ΔT2greater than ΔT1, i.e., at the instant t10the pulse has had an increase equal to ΔT+ (i.e., the duty cycle has increased by ΔT+/T1). This represents the condition in which an increase in the power or current absorbed by the electronic circuit2under test has occurred, and such an increase may be indicative of a condition of incorrect operation of the electronic circuit2under test in the case in which said increase ΔT+ is greater than an expected value in a defined (i.e., known) operating condition. In addition,FIG.3Ashows at the bottom the trend of the output signal S_ΔT, which has a zero value between the instants t0and t11(as it is assumed that there are no variations in the power absorbed by the electronic circuit2under test). At the instant t11the output signal S_ΔT begins to increase due to the detected increase in the pulse width of the control signal S_pwm_ctrl (caused by an increase in the power absorbed by the electronic circuit2under test); between the instants t11and t12the output signal S_ΔT continues to increase until reaching the positive value ΔT+ at the instant t12; finally, between the instants t12and t22the output signal S_ΔT maintains the constant value equal to ΔT+, as it is assumed that no further increases or decreases in the pulse width of the control signal S_pwm_ctrl occur (i.e., no variations in the power absorbed by the electronic circuit2under test). Subsequently, after a few cycles in which the increase ΔT+ of the pulse stabilizes, the output signal S_ΔT returns, at the instant t23, to the same value that it had between the instants t0and t11prior to the occurrence of the increase ΔT+ (i.e., the output signal S_ΔT returns to the zero value), since the outputs of the two RC circuits have reached the same final value at steady state: in this way, the variation in the power or current absorbed by the electronic circuit2under test is detected.
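The trend of S_ΔT just described can be sketched with a discrete-time model of two first-order low-pass branches. The following Python fragment is an illustration only: the sampling step is an assumption, and the time constants are taken from the example component values quoted further on (10 Kilo Ohm with 5 nano Farad and 560 nano Farad):

import math

def s_delta_t(duty_cycle_samples, dt, tau1=50e-6, tau2=5.6e-3):
    """Discrete-time sketch of the monitoring principle: the per-period
    average of the control signal (its duty cycle) is tracked by two
    first-order low-pass branches with different time constants; their
    difference is large only during the transient that follows a
    duty-cycle change and decays back to zero at steady state."""
    a1 = math.exp(-dt / tau1)  # fast branch, tau1 = R1*C1
    a2 = math.exp(-dt / tau2)  # slow branch, tau2 = R3*C2
    y1 = y2 = duty_cycle_samples[0]
    output = []
    for d in duty_cycle_samples:
        y1 = a1 * y1 + (1 - a1) * d
        y2 = a2 * y2 + (1 - a2) * d
        output.append(y1 - y2)  # ~ S_dT before amplification
    return output

# A duty-cycle step from 0.30 to 0.38 yields a transient pulse close to
# the step size (~0.08), after which the output settles back toward zero.
signal = [0.30] * 100 + [0.38] * 400
print(max(s_delta_t(signal, dt=1e-4)))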
Therefore, the greater the variation of the pulse width of the control signal S_pwm_ctrl over time, the greater the transient difference in electrical potential between the output of the first RC circuit and the output of the second RC circuit at a given instant of time. Similarly,FIG.3Bshows that at the instant t33the pulse of the pulse-width modulation control signal S_pwm_ctrl has a width ΔT1(the time interval of the portion of the period T1in which it has a high value); in the period following the instant t40the pulse has a width ΔT3less than ΔT1, i.e., between the instants t40and t41the pulse has had a decrease equal to ΔT− (i.e., the duty cycle has decreased by ΔT−/T1). This represents the condition in which a decrease in the power or current absorbed by the electronic circuit2under test has occurred, and such a decrease may be indicative of a condition of incorrect operation of the electronic circuit2under test in the case in which said decrease ΔT− is greater (in absolute value) than an expected value in a defined (i.e., known) operating condition. In addition,FIG.3Bshows at the bottom the trend of the output signal S_ΔT, which has a zero value between the instants t30and t41(since it is assumed that there are no variations in the power absorbed by the electronic circuit2under test). At the instant t41the output signal S_ΔT begins to decrease due to the detected decrease in the pulse width of the control signal S_pwm_ctrl (caused by a decrease in the power absorbed by the electronic circuit2under test); between the instants t41and t42the output signal S_ΔT continues to decrease until reaching the negative value ΔT− at the instant t42; finally, between the instants t42and t52the output signal S_ΔT maintains the constant value equal to ΔT−, since it is assumed that no further increases or decreases in the pulse width of the control signal S_pwm_ctrl occur (i.e., no variations in the power absorbed by the electronic circuit2under test). Subsequently, after a few cycles in which the decrease ΔT− of the pulse stabilizes, the output signal S_ΔT returns, at the instant t53, to the same value that it had between the instants t30and t41prior to the occurrence of the decrease ΔT− (that is, the output signal S_ΔT returns to the zero value), since the outputs of the two RC circuits have reached the same final value at steady state: in this way, the variation in the power or current absorbed by the electronic circuit2under test is detected. The processing unit5(e.g., a microprocessor or programmable electronic device) is connected to the electronic monitoring circuit4and is such to receive therefrom a signal S_ΔT representative of the variation of the pulse width (or of the duty cycle) of the pulse-width modulation control signal S_pwm_ctrl, in which said variation of the pulse width (or of the duty cycle) is a function of the power or current absorbed by the electronic circuit2under test. The processing unit5has the function of generating the diagnosis signal S_d representative of a correct operation or of an incorrect operation of the electronic circuit2under test, as a function of the comparison between the signal S_ΔT representative of the variation of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl and an expected value P_ex associated with the power or current absorbed by the electronic device2in a defined (i.e.
known) operating condition of the electronic circuit2under test. For example, the diagnosis signal S_d is a logic signal which has a low logic value indicating a correct operation of the electronic circuit2under test and a high logic value indicating an incorrect operation of the electronic circuit2under test. An example of the expected value P_ex is that of a range of expected values defined as a function of the maximum variation in the power absorbed by the electronic circuit2under test, that is, the maximum variation in power that the electronic circuit2under test can absorb under a defined (i.e., known) operating condition: in this case, the processing unit5generates the diagnosis signal S_d having a value indicative of a correct operation of the electronic circuit2under test (e.g., a low logic value) in the case in which the value of the variation ΔT+/ΔT− of the signal S_ΔT is within said range of expected values, while the processing unit5generates the diagnosis signal S_d having a value indicative of an incorrect operation of the electronic circuit2under test (e.g., a high logic value) in the case in which the value of the variation ΔT+/ΔT− of the signal S_ΔT is outside said range of expected values. More in particular, the electronic monitoring circuit4comprises the following components, which are electrically connected as shown inFIG.2: a first RC circuit; a second RC circuit; a resistor4-6; a resistor4-7; a differential amplifier4-5; a feedback resistor4-9; a resistor4-8; a resistor4-10; a capacitor4-11; a resistor4-12; a resistor4-13. The first RC circuit (i.e., a first RC branch) comprises the connection in series of a resistor4-3and of a first capacitor4-1, in which said series connection is connected between the input terminal and the ground reference voltage. During a transitory phase between the instant (t10or t40) at which the variation ΔT+/ΔT− of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl begins and the final instant (t12or t41) at which the steady-state value of said variation ΔT+/ΔT− of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl is reached, the first RC circuit has the function of detecting the trend over time of the variation ΔT+/ΔT− of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl with a first time constant τ1=R1×C1, by detecting the trend of the voltage drop at the ends of the capacitor C1. The second RC circuit (i.e., a second RC branch) comprises the connection in series of a resistor4-4and a second capacitor4-2, in which said series connection is connected between the input terminal and the ground reference voltage. Similarly, during the same transitory phase between the instant (t10or t40) at which the variation ΔT+/ΔT− of the pulse width (or its duty cycle) of the control signal S_pwm_ctrl begins and the final instant (t12or t41) at which the steady-state value of said variation ΔT+/ΔT− of the pulse width (or its duty cycle) of the control signal S_pwm_ctrl is reached, the second RC circuit has the function of detecting the trend over time of the variation ΔT+/ΔT− of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl with a second time constant τ2=R2×C2(different from the first time constant τ1=R1×C1), by detecting the trend of the voltage drop at the ends of the capacitor C2.
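The range check performed by the processing unit5, as described above, can be summarized in a few lines. The following Python fragment is a minimal sketch with illustrative names; the expected range P_ex and the measured variation are assumed to be available as plain numbers:

def diagnosis_signal(delta_t_variation, expected_range):
    """Range check of the processing unit: logic 0 (correct operation)
    if the measured pulse-width variation dT+/dT- lies inside the
    expected range P_ex for the known operating condition, logic 1
    (incorrect operation) otherwise."""
    low, high = expected_range
    return 0 if low <= delta_t_variation <= high else 1

# Example: an expected window of +/-5% duty-cycle change.
assert diagnosis_signal(0.03, (-0.05, 0.05)) == 0   # within range: OK
assert diagnosis_signal(0.09, (-0.05, 0.05)) == 1   # outside range: fault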
The output of the first RC circuit is the voltage drop at the ends of the first capacitor4-1(i.e., the terminal voltage common to the resistor4-3and the capacitor4-1) and the output of the first RC circuit is connected to the first input terminal of the differential amplifier4-5by means of the resistor4-6. The output of the second RC circuit is the voltage drop at the ends of the second capacitor4-2(i.e., the terminal voltage common to the resistor4-4and the second capacitor4-2) and the output of the second RC circuit is connected to the second input terminal of the differential amplifier4-5by means of the resistor4-7. The set of the first RC circuit, of the second RC circuit and of the differential amplifier4-5have the function of processing, during a transitory phase, the variation ΔT+/ΔT− of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl with different timings between the first RC circuit and the second RC circuit, in order to effectively catch (during the transitory phase) the difference of the variation of the pulse width (or of its duty cycle) detected by means of the first and second RC circuits as it occurs in a short interval of time (in the order of hundreds of microseconds), according to the following two possible solutions:the value of the resistance R1of the resistor4-3of the first RC circuit is equal to the value of the resistance R3of the resistor4-4of the second RC circuit, while the value of the capacitance C1of the first capacitor4-1is different from the value of the capacitance C2of the second capacitor4-2(for example, R1=R3=10 Kilo Ohm, C1=5 nano Farad, C2=560 nano Farad): in this case the variation of the pulse width (or of its duty cycle) of the pulse-width modulation control signal S_pwm_ctrl is proportional (in the transitory phase) to the difference between the values of the capacitances C1and C2of the capacitors4-1and4-2;the value of the resistance R1of the resistor4-3is different from the value of the resistance R3of the resistor4-4, while the value of the capacitance C1of the first capacitor4-1is equal to the value of the capacitance C2of the second capacitor4-2: in this case the variation of the pulse width (or of its duty cycle) of the pulse-width modulation control signal S_pwm_ctrl is proportional (in the transitory phase) to the difference between the values of the resistances R1and R3of the resistors4-3and4-4. In both of the above solutions, the greater the variation in time of the duty cycle, the greater the difference between the voltage generated as output from the first RC circuit and the voltage generated as output from the second RC circuit during the transitory phase. The resistor4-6is connected between the terminal common to the first capacitor4-1and to the resistor4-3and the first input terminal of the differential amplifier4-5. The resistor4-7is connected between the terminal common to the second capacitor4-2and to the resistor4-4and the second input terminal of the differential amplifier4-5. The value of the resistance R9of the resistor4-6is for example equal to 1 Kilo Ohm and the value of the resistance R11of the resistor4-7is for example equal to 1 Kilo Ohm. The differential amplifier4-5is supplied with a first supply voltage VCC1, equal for example to 5 Volts. 
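With the example component values given above, the two branches settle on very different time scales: τ1=R1×C1=10 Kilo Ohm×5 nano Farad=50 microseconds, while τ2=R3×C2=10 Kilo Ohm×560 nano Farad=5.6 milliseconds, i.e., roughly two orders of magnitude apart. It is this mismatch that makes the difference between the two branch outputs large during the transitory phase and zero again once both branches have settled.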
The differential amplifier4-5comprises a first input terminal connected to the resistor4-6, comprises a second input terminal connected to the resistor4-7, and comprises an output terminal adapted to generate an amplified voltage signal V_ampl as a function of the difference between the voltage of the first and of the second input terminals. The differential amplifier4-5has the function of amplifying the difference between the voltage of the first and of the second input terminals. The differential amplifier4-5is made for example as an operational amplifier. The resistor4-8is connected between a reference voltage V_ref (e.g., equal to 1.65 Volts) and the first input terminal of the differential amplifier4-5. For example, the value of the resistance R28of the resistor4-8is equal to 100 Kilo Ohm. The feedback resistor4-9is connected between the second input terminal and the output terminal of the differential amplifier4-5. The value of the resistance R29of the feedback resistor4-9is for example equal to 100 Kilo Ohm. The resistor4-10comprises a first terminal connected to the output terminal of the differential amplifier4-5. For example, the value of the resistance R12of the resistor4-10is equal to 200 Ohms. The capacitor4-11is connected between the second terminal of the resistor4-10and the ground reference voltage. The value of the capacitance C3of the capacitor4-11is for example equal to 220 nano Farad. The resistor4-12is connected between the second terminal of the resistor4-10and the output terminal. The resistor4-13is connected between the second terminal of the resistor4-12and the ground reference voltage. A Zener diode4-15is connected between a second supply voltage VCC2and the terminal common to the resistors4-10and4-12. Preferably, the electronic monitoring circuit4comprises a diode4-14interposed between the input terminal and the resistors4-3,4-4; the diode4-14comprises the anode terminal connected to the input terminal of the electronic monitoring circuit4and the cathode terminal connected to the resistor4-3and to the resistor4-4. Note thatFIG.2shows an electronic monitoring circuit4in which there are two RC circuits, in order to detect (during the transitory phase) the difference between the variation (measured by the voltage drop at the ends of the first capacitor4-1of the first RC circuit) of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl and the variation (measured by the voltage drop at the ends of the second capacitor4-2of the second RC circuit) of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl. However, the invention can also be implemented using a single RC circuit (for example, the connection in series of the resistor4-3and of the capacitor4-1, i.e., without the capacitor4-2and the resistor4-4), thus detecting the steady-state absolute value of the variation ΔT+ (increase) or ΔT− (decrease) of the pulse width (or of its duty cycle) of the control signal S_pwm_ctrl. It should also be noted that for the purpose of the explanation of the invention, a differential amplifier4-5has been considered, but other electronic components can also be used.
It should also be noted that for the purpose of the explanation of the invention, only one electronic circuit2under test has been considered, but more generally the invention is also applicable to two or more electronic circuits under test (i.e., to two or more portions of the same electronic circuit): in this case the two or more circuits (or two or more circuit portions) are activated in sequence and for each of them the variation of the pulse width (or of its duty cycle) of the respective pulse-width modulation control signals is measured under respective defined operating conditions (i.e., known). Note that the invention can also be made entirely in software, using a microprocessor instead of the electronic monitoring circuit4, provided that a microprocessor with a sufficiently high computing power is available which is able to catch, in each period of the control signal S_pwm_ctrl, the variation of the pulse width (or of its duty cycle) of the pulse-width modulation control signal S_pwm_ctrl.
24,671
11860219
DETAILED DESCRIPTION OF THE EMBODIMENTS In order to make purposes, technical solutions and advantages of embodiments of the present disclosure more explicit, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some embodiments of the present disclosure, not all of them. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present disclosure. The terms “first” and “second” are only used for a purpose of description, and cannot be considered as indicating or implying relative importance or implicitly indicating the number of indicated technical features. It should be understood that, in the present disclosure, “including” and “having” and any variations of them are intended to cover non-exclusive inclusion. Depending on the context, “if” as used herein can be interpreted as “in the case of” or “when” or “in response to determination” or “in response to detection”. The technical solutions of the present disclosure will be described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. A metal mesh sensor uses metal meshes as an electrical conductive function layer of a sensor, to achieve sensing and driving functions. In a manufacturing process of a touch panel using a metal mesh sensor, electrical detection of metal meshes therein is an important step.FIG.1ais a schematic diagram of a structure of a metal mesh. The diamond blocks formed by meshes inFIG.1aillustrate touch units arranged in row directions and column directions of metal meshes10and channels formed by the touch units. InFIG.1a, the touch units on column direction channels are selected with dashed line frames; the parts of the touch units along the dashed lines are main paths of the column direction channels, and the remaining parts are branch paths. A branch path residue is also shown inFIG.1a.FIG.1bis a schematic diagram of a microstructure of the metal meshes shown inFIG.1a. As shown inFIG.1b, all the touch units in the metal meshes are composed of hollowed-out wirings in the same square shape, and discontinuous fractures in square structures form gaps between unconnected touch units inFIG.1b. InFIG.1a, the touch units in a row direction constitute a row direction channel, and the touch units in a column direction constitute a column direction channel. At present, the metal meshes10usually include a plurality of row direction channels and a plurality of column direction channels insulated and intersected with the row direction channels; both the row direction channels and the column direction channels include a plurality of touch units connected with each other. The intersections between the row direction channels and the column direction channels are insulated and overlapped, and smallest repeating units on the row direction channels and the column direction channels are the touch units. As shown inFIG.1b, hollowed-out wirings for sensing electrical signals are arranged in each of the touch units, and hollowed-out portions provide gaps for light emission of pixels below.
A metal used in the row direction channels and the column direction channels in the metal mesh10may be, for example, titanium-aluminum-titanium, or molybdenum, and there is no limitation on this in the present disclosure. In use of the metal mesh sensor, if a disconnection defect or a residual defect separately exists in a row direction channel and a column direction channel of the metal meshes, or a short-circuit occurs between adjacent channels, touch effects will be affected. The disconnection defect includes a main path disconnection and a branch path disconnection. At present, the non-contact electrical detection technology in the related art usually uses cutting-type scanning row by row; it can detect an abnormity of a main path of a channel. However, when a branch path of a channel is abnormal, since the entire channel can still be conductive, its defect cannot be detected. In addition, existing detection devices can only position the detection at a channel level, and positioning of the defect is not accurate enough. In order to improve the detection reliability and accuracy for a touch panel, the present disclosure discloses a device for detecting a touch panel in which the detection of each of the plurality of touch units is achieved by capacitively coupling at least one small-sized signal transceiving component with the metal meshes and utilizing the detection signal of the resulting coupling capacitance. Various implementations of the present disclosure will be introduced and exemplified below in combination with the accompanying drawings and specific embodiments. FIG.2is a schematic diagram of a structure of a device for detecting a touch panel provided by an embodiment of the present disclosure. A device20for detecting a touch panel shown inFIG.2is used for detecting the metal meshes10(seeFIG.4or5) on the touch panel, wherein the device20for detecting a touch panel includes: a signal transceiving component21and a defect detecting unit22. The signal transceiving component21is configured to be capacitively coupled with the metal meshes10, so as to obtain a detection signal when passing through a detection area; the signal transceiving component is movably arranged relative to the metal meshes, and a comprehensive detection of the metal meshes is achieved through the movement of the signal transceiving component. A coupling distance between the signal transceiving component21and the metal meshes may be in a range of 10 μm±10 μm; specifically, the capacitive coupling distance can be adjusted according to the actual condition. Specifically, the signal transceiving component21includes a signal generating unit211and a signal receiving unit212. The signal generating unit211is configured to send out an original signal, and the original signal generates a detection signal on a metal mesh10; the signal receiving unit212is configured to receive the detection signal at a fixed distance from the signal generating unit211on the metal mesh10. SeeFIG.3, which is a schematic diagram of the detection principle of a device for detecting a touch panel provided by an embodiment of the present disclosure. As shown inFIG.3, when both the signal generating unit211and the signal receiving unit212are arranged above the metal mesh of the touch panel to be detected, a distance between an emitting surface of the signal generating unit211and the metal mesh to be detected is d1, and a distance between a receiving surface of the signal receiving unit212and the metal mesh to be detected is d2.
Therefore, when a high-frequency signal is applied between the signal generating unit211and the signal receiving unit212, the emitting surface of the signal generating unit211and the metal mesh to be detected can approximately form a plate capacitor C1, and the receiving surface of the signal receiving unit212and the metal mesh to be detected can approximately form a plate capacitor C2. The emitting surface of the signal generating unit211and the receiving surface of the signal receiving unit212each serve as one of the plates of the corresponding plate capacitor, and can be made of a metal material. Based on the working principle of a capacitor loaded with a high-frequency signal, by loading a high-frequency voltage signal to the emitting surface of the signal generating unit211, the high-frequency voltage signal can pass through the plate capacitor C1, the metal mesh to be detected and the plate capacitor C2, and finally be received by the receiving surface of the signal receiving unit212. The object to be detected shown inFIG.3is the metal mesh through which the high-frequency voltage signal passes. If the metal mesh to be detected has no defect position, the detection signal received by the signal receiving unit212is consistent with the fluctuation law of the waveform of the high-frequency voltage signal loaded by the signal generating unit211. FIG.4is a schematic diagram of a signal transceiving component detecting a metal mesh provided by an embodiment of the present disclosure. As shown inFIG.4, the signal generating unit211and the signal receiving unit212are configured to be arranged on the same side of the metal meshes in preset relative positions, and send and receive signals through capacitive coupling with the metal meshes10, respectively. It can be seen fromFIG.4that the layout area of the signal generating unit211and the signal receiving unit212only covers a portion of the metal meshes, and only the condition of the metal meshes within the layout area can be determined; in order to achieve a comprehensive detection of the metal meshes, the signal generating unit211and the signal receiving unit212provided by the present disclosure are movably arranged relative to the metal mesh. In this way, the preset relative positions can be considered fixed, which requires that the signal generating unit211and the signal receiving unit212move synchronously during movement scanning detection, so as to reduce the influence of a relative position change on the detection accuracy. During the movement scanning detection, for example, the signal transceiving component21is movably arranged above the metal mesh to be detected, maintains a predetermined gap with the metal mesh to be detected, and, after being loaded with a stable high-frequency signal, forms a capacitive coupling path with the metal mesh to be detected below. The portion of the metal mesh through which the capacitive coupling path passes is the metal mesh to be detected as shown inFIG.3. The signal transceiving component21is controlled to move along the row direction of the metal mesh to be detected (the X direction shown inFIG.4), and collects the detection signal received by the signal receiving unit212during the movement. When the signal transceiving component21moves to or is close to a defect position, the waveform of the detection signal will be abnormal.
The waveform of the detection signal can be used to determine whether the touch unit on the metal mesh to be detected has a defect, and to determine a position of the defect. The metal mesh10itself is not provided with an electrical signal additionally, but obtains an electrical signal by coupling with the original signal sent by the signal generating unit211; the electrical signal then passes through an electrically connected line and is subsequently received by the signal receiving unit212. The signal transceiving component21and the metal mesh10can be simply understood as a capacitor structure. Since the capacitance value is directly proportional to the area of the sensing plate, the larger the area of the capacitance plate, the larger the capacitance value, which can be understood as a greater amplitude of the electrical signal. If there is a disconnection defect (including a main path disconnection defect and a branch path disconnection defect) in the metal mesh10in a detection area between the signal generating unit211and the signal receiving unit212, the coupling area will be less than a normal coupling area, and the amplitude of the detection signal received by the signal receiving unit212will be smaller than a normal amplitude. Conversely, if there is a residual defect (including a residual defect that causes a short-circuit of adjacent channels and a residual defect that does not affect electrical performance) in the metal mesh10in a detection area between the signal generating unit211and the signal receiving unit212, the coupling area will be greater than the normal coupling area, and the amplitude of the detection signal received by the signal receiving unit212will be greater than a normal amplitude. In this way, a non-contact detection of the metal mesh can be achieved. The defect detecting unit22connected with the signal transceiving component21is used for detecting each of the plurality of touch units in the metal mesh10based on a detection signal received by the signal receiving unit212. In a detection manner shown inFIG.3, the signal transceiving component21may use the touch unit as the smallest detecting unit. The signal transceiving component21uses the touch unit as a minimum moving distance unit, and a center line for detecting is, for example, coincident with a center line of a channel where the touch unit to be detected is located, so as to increase a coverage area and detection accuracy of the signal transceiving component21to the detected touch unit. The defect detecting unit22analyzes the detection signal received by the signal receiving unit212, so as to determine whether the touch unit corresponding to the detection signal is defective and which defect types may exist. A specific analysis method can be seen from subsequently listed examples of the defect detecting unit22. The present embodiment provides a device20for detecting a touch panel, and the device is used for detecting the metal meshes10on the touch panel, wherein the row direction channels and the column direction channels on the metal meshes10are composed of touch units which are connected with each other.
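The proportionality between coupling area and received amplitude follows from the parallel-plate approximation. The following Python fragment is a rough numeric sketch; the coupling area, gap, and defect fraction are illustrative assumptions, not values from the source:

EPS0 = 8.854e-12  # vacuum permittivity (farad per meter)

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate approximation C = eps0 * eps_r * A / d for the
    coupling between a transceiving unit and the mesh below it."""
    return EPS0 * eps_r * area_m2 / gap_m

# Assumed numbers: a 1 mm x 1 mm coupling area at a 10 um gap.
c_normal = plate_capacitance(1e-6, 10e-6)
# A disconnection removes part of the coupled mesh area, so C and the
# received amplitude drop; a residue enlarges the area, so both rise.
c_disconnection = plate_capacitance(0.6e-6, 10e-6)
print(c_normal, c_disconnection)  # about 0.89 pF versus 0.53 pF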
In the device20for detecting a touch panel, the signal transceiving component21includes a signal generating unit211and a signal receiving unit212; the signal generating unit211and the signal receiving unit212are configured to be arranged on the same side of the metal meshes10in preset relative positions, and send and receive signals through capacitive coupling with the metal meshes10, respectively; the defect detecting unit22is configured to detect each of the plurality of touch units in the metal meshes10based on a detection signal received by the signal receiving unit212, thereby achieving a detection of a defect of the metal meshes of the touch panel, positioning of the defect in the touch unit, and improving a detection reliability. In order to improve the detection accuracy and reliability of the signal transceiving component21in the embodiment shown inFIG.4, the following are some optional embodiments of sizes of the signal transceiving component21. In some embodiments, a detection width of the signal transceiving component21is greater than a size of each of the plurality of touch units in a width direction, and is less than or equal to a size of three touch units in the width direction, wherein the detection width is a size of a detection area of the signal transceiving component21in the width direction, and the width direction is a direction perpendicular to a detection direction. Taking a scanning detection in the X direction inFIG.4as an example, the detection direction is the X direction, and the width direction is the Y direction. The detection width can be a maximum distance in the Y direction that the signal transceiving component21can detect. Herein, the detection area can be specifically obtained by an experiment. By limiting the detection width, it is ensured that the signal transceiving component21can span between two channels or three channels. In this way, it is possible to detect whether there is a short-circuit defect between a touch unit and an adjacent channel. In some implementations of the present embodiment, the width of the signal generating unit211is a size of one touch unit in the width direction; a width of the signal receiving unit212is greater than a size of each of the plurality of touch units in the width direction, and is less than or equal to a size of three touch units in the width direction. In some embodiments, a detection length of the signal transceiving component21is greater than or equal to a size of each of the plurality of touch units in a length direction; wherein the detection length is a size of a detection area of the signal transceiving component21in the length direction, and the length direction is a direction consistent with a detection direction. Taking a scanning detection in the X direction inFIG.4as an example, the detection direction and the length direction are the X direction. The detection length can be a maximum distance in the X direction that the signal transceiving component21can detect. For example, in an embodiment where the signal transceiving component21includes one signal generating unit211and one signal receiving unit212, the detection length of the signal transceiving component21can be understood as the distance in the X direction spanned by the positions where the one signal generating unit211and the one signal receiving unit212are respectively coupled with the metal meshes10. Herein, the detection area can be specifically obtained by an experiment.
The detection length is limited so that the detection signal of the signal transceiving component21can be used to detect at least one complete touch unit every time, and each of the plurality of touch units is at least partially detected repeatedly, thereby improving the reliability and accuracy of detection. In some embodiments, a plurality of the signal transceiving components21can be arranged to scan and detect multiple rows/columns at the same time; for example, the device for detecting a touch panel includes a plurality of the signal transceiving components21, and the plurality of the signal transceiving components21are arranged in a width direction, wherein the width direction is a direction perpendicular to a detection direction. In order to improve the accuracy of detection, each signal transceiving component21is used to detect the row direction channels or the column direction channels, detection areas of the plurality of the signal transceiving components21are distributed at equal intervals, and a distance between centers of two adjacent signal transceiving components21is a size of N touch units in the width direction, where N is an integer greater than or equal to 2. For example, when detecting in the X direction, the distance between centers is equal to the size of two touch units in the width direction, and a circuit that detects the X direction needs to be scanned at least 3 times to ensure the detection accuracy and high positioning accuracy. Through synchronous detection of the plurality of the signal transceiving components21, the detection efficiency is improved. Furthermore, by limiting the distance between centers, a whole number of scanning passes can cover all areas exactly, which further improves the reliability of detection. In the above embodiment, the signal transceiving component21can include one signal generating unit211and one signal receiving unit212, but the present embodiment is not limited thereto. Furthermore, the signal transceiving component21can be arranged as including one signal generating unit211and two signal receiving units212.FIG.5is a schematic diagram of a structure of a signal transceiving component with two signal receiving units provided by an embodiment of the present disclosure. As shown inFIG.5, the two signal receiving units212are arranged along a detection direction, the signal generating unit211is arranged between the two signal receiving units212, and the distances from the signal generating unit211to the two signal receiving units212are the same. By arranging one signal receiving unit212at the front and the back of the detection direction, respectively, and detecting two detection areas in front of and behind the signal generating unit211at the same time, the repeated detection of overlapped areas is implemented during a scanning detection process, thereby improving the comprehensiveness and reliability of detection. Continuing to refer to the embodiment shown inFIG.5, optionally, a position where a single signal generating unit211is capacitively coupled with the metal mesh10and a position where the two signal receiving units212are capacitively coupled with the metal meshes10are aligned with centers of row direction channels or column direction channels of the metal meshes10in the detection direction. By aligning the signal generating unit211and the two signal receiving units212in a straight line, the accuracy of detection and positioning is further improved.
FIG. 6 is a schematic diagram of a structure of another device for detecting a touch panel provided by an embodiment of the present disclosure. On the basis of the above various embodiments, in some embodiments, the signal transceiving component 21 can include a signal generating unit 211 and a signal receiving unit 212, and the defect detecting unit 22 connected with the signal transceiving component 21 can specifically include a signal comparison unit 221 and a processing unit 222, as shown in FIG. 6.

In some embodiments shown in FIG. 6, the signal comparison unit 221 is configured to obtain the detection signal from the signal receiving unit 212, and determine an abnormal wave band based on a change of a waveform of the detection signal. FIG. 7 shows examples of comparison of waveforms of detection signals corresponding to several types of defects provided by an embodiment of the present disclosure. In the waveforms shown in FIG. 7, a normal waveform should be a sine wave, and wave bands with increased or decreased amplitude are abnormal wave bands with defects at the corresponding positions. The signal waveform is a time-domain waveform, and a detection position corresponding to each wave band is determined; the touch unit with a defect can be determined by calculating a moving position of the signal transceiving component 21.

Continuing to refer to the processing unit 222 shown in FIG. 6, it is used for determining the touch unit with a defect in the metal mesh 10 based on a detection position of the signal transceiving component 21 at a time corresponding to the abnormal wave band, and for determining whether an amplitude of the abnormal wave band is larger than an average amplitude of multiple adjacent wave bands.

(1) See FIG. 7: if it is determined that the amplitude of the abnormal wave band is greater than the average amplitude of multiple adjacent wave bands, the processing unit 222 will determine whether an amplitude variance of the abnormal wave band and the multiple adjacent wave bands is greater than a preset threshold: if yes, it is determined that the defect type is that the touch unit corresponding to the detection position has a residual defect; if not, it is determined that the defect type is that the touch unit corresponding to the detection position has a short-circuit defect with an adjacent channel.

(2) See FIG. 7: if it is determined that the amplitude of the abnormal wave band is less than the average amplitude of multiple adjacent wave bands, the processing unit 222 will determine whether the amplitude of the abnormal wave band is 0: if it is determined that the amplitude of the abnormal wave band is 0, it is determined that the defect type is that there is a main path disconnection defect in the touch unit corresponding to the detection position; if it is determined that the amplitude of the abnormal wave band is not 0, it is determined that the defect type is that there is a branch path disconnection defect in the touch unit corresponding to the detection position.

As a result, the device 20 for detecting a touch panel in the present embodiment can distinguish the residual defect from the short-circuit defect with adjacent channels in the touch unit, and can also distinguish the main path disconnection defect from the branch path disconnection defect, having a relatively high defect detection capacity and improving the detection reliability for the touch panel.
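The decision procedure of branches (1) and (2) above can be summarized by the following minimal sketch (in Python; the function name, the statistics helper, and the threshold parameter are illustrative assumptions, not taken from the embodiments):

    from statistics import pvariance

    def classify_defect(band_amplitude, adjacent_amplitudes, variance_threshold):
        """Classify the defect behind an abnormal wave band, following
        branches (1) and (2) above.

        band_amplitude: amplitude of the abnormal wave band;
        adjacent_amplitudes: amplitudes of multiple adjacent wave bands;
        variance_threshold: the preset threshold for the amplitude variance.
        """
        average = sum(adjacent_amplitudes) / len(adjacent_amplitudes)
        if band_amplitude > average:
            # Branch (1): increased amplitude. The variance is taken over
            # the abnormal band together with the adjacent bands.
            variance = pvariance(adjacent_amplitudes + [band_amplitude])
            if variance > variance_threshold:
                return "residual defect"
            return "short-circuit defect with an adjacent channel"
        if band_amplitude < average:
            # Branch (2): decreased amplitude.
            if band_amplitude == 0:
                return "main path disconnection defect"
            return "branch path disconnection defect"
        return None  # amplitude matches the adjacent average: no abnormality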
In an embodiment where the signal transceiving component 21 includes one signal generating unit 211 and two signal receiving units 212, the implementation principle can refer to the embodiment where the signal transceiving component 21 includes one signal generating unit 211 and one signal receiving unit 212; there may be two defect detecting units 22, which respectively analyze the detection signals of the two signal receiving units 212. The effect of this implementation is similar to the above and will not be repeated herein.

In the above embodiments, the signal transceiving component 21 may be composed of one signal generating unit 211 and one signal receiving unit 212, or of one signal generating unit 211 and a plurality of the signal receiving units 212. In an embodiment with a plurality of the signal receiving units 212, in addition to the signal generating unit 211 and the signal receiving units 212, the signal transceiving component may further include two auxiliary receiving units 213. FIG. 8 is a schematic diagram of a structure of a signal transceiving component with two auxiliary receiving units provided by an embodiment of the present disclosure. FIG. 9 is a schematic diagram of a structure of another signal transceiving component with two auxiliary receiving units provided by an embodiment of the present disclosure. In FIG. 8 and FIG. 9, solid line diamonds indicate the touch units connected in the row direction channels, and dotted line diamonds indicate the touch units connected in the column direction channels.

The structure of the auxiliary receiving units 213 is the same as that of the signal receiving unit 212, and both receive the detection signal on the metal meshes 10, but the main function of the auxiliary receiving units 213 is to provide an auxiliary reference for determining the defect type. As shown in FIG. 8, the two auxiliary receiving units 213 are symmetrically arranged on both sides of the signal generating unit 211 in a direction perpendicular to the detection direction, and a detection distance between the two auxiliary receiving units 213 is less than or equal to a size of two touch units in the detection width. The detection distance of the auxiliary receiving units 213 is a distance that can be detected in the direction perpendicular to the detection direction, that is, a distance between the positions at which the two auxiliary receiving units 213 are capacitively coupled with the metal mesh.

In a specific implementation, the detection width of the signal receiving unit 212 in the present embodiment is the width of one touch unit, and the detection width of one auxiliary receiving unit 213 is less than the width of one touch unit. The position layout is as follows: the signal receiving unit 212 and the signal generating unit 211 are each in the middle of a channel to be detected, and the two auxiliary receiving units 213 are on both sides of the signal receiving unit 212, on the center lines of the channels on both sides.

The signal transceiving components shown in FIG. 8 and FIG. 9 are both layouts for detecting in the row direction, and each includes two signal receiving units 212, two auxiliary receiving units 213 and one signal generating unit 211. The metal mesh is capacitively coupled with two opposite units, and the distance between the edges of the two coupling areas away from the signal generating unit is the detection distance of the two opposite units.
The two opposite units can be the two signal receiving units 212 or the two auxiliary receiving units 213. The detection distance between the two signal receiving units 212 forms the detection length of the signal transceiving component: FIG. 8 shows a signal transceiving component whose detection length is 3 touch units, and FIG. 9 shows one whose detection length is 5 touch units. The detection distance between the two auxiliary receiving units 213 forms the detection width of the signal transceiving component; both FIG. 8 and FIG. 9 show a detection width of 3 touch units. The abovementioned length direction is the detection direction, and the width direction is a direction perpendicular to the detection direction. FIG. 8 and FIG. 9 are only schematic diagrams of the detection length and detection width of the signal transceiving component, and the present disclosure is not limited thereto.

If the touch unit is short-circuited with its adjacent previous row/column channel, the amplitude of the waveform received by the auxiliary receiving unit 213 arranged close to the previous row/column channel will increase; similarly, if the touch unit is short-circuited with its adjacent next row/column channel, the amplitude of the waveform received by the auxiliary receiving unit 213 arranged close to the next row/column channel will increase. By arranging two auxiliary receiving units 213 in the direction perpendicular to the detection direction, the accuracy and reliability of detection of the short-circuit defect with adjacent channels can be improved.

For the embodiments with the auxiliary receiving units 213 shown in FIG. 8 and FIG. 9, the defect detecting unit used (not shown), whose structure is similar to that of the defect detecting unit 22 shown in FIG. 6, can also include a signal comparison unit 221 and a processing unit 222, but the functions of the units differ. In an embodiment with the auxiliary receiving units 213, the signal comparison unit 221 is configured to obtain the detection signal from the signal receiving unit 212, obtain a first auxiliary signal and a second auxiliary signal from the two auxiliary receiving units 213, and determine an abnormal wave band based on a change of a waveform of the detection signal; and to respectively determine a first auxiliary wave band and a second auxiliary wave band in the waveforms of the first auxiliary signal and the second auxiliary signal based on a detection time corresponding to the abnormal wave band. When the amplitude of the waveform increases, the first auxiliary signal and the second auxiliary signal can assist in improving the accuracy of determining the defect type.

The processing unit 222 shown in FIG. 6 is configured to determine the defective touch unit in the metal mesh based on a detection position of the signal transceiving component 21 at a time corresponding to the abnormal wave band, and to determine whether an amplitude of the abnormal wave band is greater than an average amplitude of multiple adjacent wave bands.
(1) Continuing to refer to FIG. 7, if it is determined that the amplitude of the abnormal wave band is greater than the average amplitude of multiple adjacent wave bands, it is determined whether the amplitude of the first auxiliary wave band or the second auxiliary wave band is greater than 0: if yes, it is determined that the defect type is that the touch unit corresponding to the detection position has a short-circuit defect with an adjacent channel; if not, it is determined that the defect type is that the touch unit corresponding to the detection position has a residual defect.

In some cases, a residual defect in a single channel, for example residues on the main or branch path of the touch unit, may not affect the electrical performance of the touch panel. However, the residual part reduces the local light-passing area, so that light emitted from the pixels below is blocked, causing a local dark spot and degrading the display effect. Therefore, it is very important to detect the residual defect on the main or branch path of the touch unit. With the auxiliary receiving units 213, the accuracy of detecting both the short-circuit defect between the touch unit and the adjacent channel and the residual defect of the touch unit is improved, and the detection of the short-circuit defect with adjacent channels has higher accuracy and reliability.

(2) Continuing to refer to FIG. 7, if it is determined that the amplitude of the abnormal wave band is less than the average amplitude of multiple adjacent wave bands, it is determined whether the amplitude of the abnormal wave band is 0: if it is determined that the amplitude of the abnormal wave band is 0, it is determined that the defect type is that there is a main path disconnection defect in the touch unit corresponding to the detection position; if it is determined that the amplitude of the abnormal wave band is not 0, it is determined that the defect type is that there is a branch path disconnection defect in the touch unit corresponding to the detection position. A consolidated sketch of this auxiliary-signal decision procedure follows the system description below.

On the basis of the above various embodiments, the present disclosure further provides a system for detecting a touch panel, including a movement device and the device 20 for detecting a touch panel described in any of the above embodiments. The movement device is configured to control the signal transceiving components 21 to move along center lines of the row direction channels and the column direction channels of the metal mesh 10, and to send positioning information indicating the detection position of the signal transceiving component 21 to the defect detecting unit 22. Herein, for example, the movement device can be a movement device with an X movement axis and a Y movement axis: the device 20 for detecting a touch panel is driven by the X movement axis to detect the metal meshes 10 on the touch panel in the X direction, and then driven by the Y movement axis to detect the metal meshes 10 on the touch panel in the Y direction. For example, the movement device can include a carrier platform and a movement arm matched with the carrier platform; after the movement device controls the signal transceiving component 21 to perform a detection in the X direction, the carrier platform carrying the metal meshes can be rotated by 90 degrees, and then a detection in the Y direction can be performed. It is also possible that the movement arm controls the signal transceiving component 21 to change from scanning in the X direction to scanning in the Y direction.
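As referenced above, branches (1) and (2) of the auxiliary-signal procedure can be consolidated as in the following minimal sketch (in Python; the names are illustrative and adjust the earlier classification sketch only in its high-amplitude branch):

    def classify_defect_with_aux(band_amplitude, adjacent_amplitudes,
                                 first_aux_amplitude, second_aux_amplitude):
        """Variant of the earlier classification sketch using the first and
        second auxiliary wave bands taken at the detection time of the
        abnormal wave band."""
        average = sum(adjacent_amplitudes) / len(adjacent_amplitudes)
        if band_amplitude > average:
            # Branch (1): a nonzero auxiliary band indicates coupling to an
            # adjacent channel, i.e. a short circuit; otherwise the increase
            # is attributed to a residual defect.
            if first_aux_amplitude > 0 or second_aux_amplitude > 0:
                return "short-circuit defect with an adjacent channel"
            return "residual defect"
        if band_amplitude < average:
            # Branch (2): unchanged from the earlier sketch.
            if band_amplitude == 0:
                return "main path disconnection defect"
            return "branch path disconnection defect"
        return None  # no abnormality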
The implementation manner of the movement device is not limited herein. Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications to the technical solutions described in the foregoing embodiments, or equivalent substitutions of some or all of the technical features therein, can still be made; and these modifications or substitutions shall not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present disclosure.
34,811
11860220
DETAILED DESCRIPTION For the ease of understanding the disclosure, the disclosure is described more completely with reference to the related drawings. Preferred examples of the disclosure are given in the drawings. However, the disclosure may be implemented in many different forms and is not limited to the examples described herein. Conversely, these examples are provided to make the disclosure more thorough and complete.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as generally understood by a person skilled in the art to which the disclosure belongs. The terms used in the specification herein are merely to describe the specific examples, rather than to limit the disclosure. The term "and/or" used herein includes one associated item that is listed, or any or all possible combinations of associated items that are listed.

The disclosure provides a method for evaluating an HCI effect of a device. As shown in FIG. 1, the method may specifically include the following steps:

In block S10: a ratio of a substrate current to a drain current of a first device at different gate-source voltages is acquired, and recorded as a first current ratio.

In block S20: a ratio of a substrate current to a drain current of a second device at different gate-source voltages is acquired, and recorded as a second current ratio, wherein the second device is subjected to a process parameter adjustment or a device parameter adjustment relative to the first device.

In block S30: an influence of the process parameter adjustment or the device parameter adjustment on an HCI effect of the device is determined based on the second current ratio and the first current ratio.

In an optional embodiment, block S10 may specifically include the following operations.

In S101: the substrate current Isub1 of the first device at the different gate-source voltages Vgs1 is acquired.

In S102: the drain current Ids1 of the first device at the different gate-source voltages Vgs1 is acquired.

In S103: the substrate current Isub1 and the drain current Ids1 of the first device at each gate-source voltage Vgs1 are divided to obtain the first current ratio Isub1/Ids1.

Specifically, in the process of acquiring the ratio of the substrate current Isub1 to the drain current Ids1 of the first device at the different gate-source voltages Vgs1, a source-drain voltage Vds1 between a source and a drain of the first device is set as VCC, both the source and a substrate of the first device are grounded, and the source-drain voltage Vds1 of the first device is not higher than 3 V. The gate-source voltage Vgs1 between a gate and the source of the first device is between 0 V and VCC. In this process, the gate-source voltage Vgs1 of the first device rises gradually from 0 V until it equals the source-drain voltage Vds1 of the first device, in a step size of 0.01 V to 0.1 V. In other optional examples, the gate-source voltage Vgs1 may rise gradually in a step size of 0.01 V, 0.05 V or 0.1 V; in the examples, the gate-source voltage Vgs1 rises gradually in a step size of 0.05 V. In the process of gradually raising the gate-source voltage Vgs1, the substrate current Isub1 and the drain current Ids1 at each gate-source voltage Vgs1 are collected.

In an optional example, block S20 may specifically include the following operations.
In S201: the substrate current Isub2 of the second device at the different gate-source voltages Vgs2 is acquired.

In S202: the drain current Ids2 of the second device at the different gate-source voltages Vgs2 is acquired.

In S203: the substrate current Isub2 and the drain current Ids2 of the second device at each gate-source voltage Vgs2 are divided to obtain the second current ratio Isub2/Ids2.

Specifically, the second device is adjusted in a process parameter or a device parameter relative to the first device. In the process of acquiring the ratio of the substrate current Isub2 to the drain current Ids2 of the second device at the different gate-source voltages Vgs2, a source-drain voltage Vds2 between a source and a drain of the second device is set as VCC, both the source and a substrate of the second device are grounded, and the source-drain voltage Vds2 of the second device is not higher than 3 V. The gate-source voltage Vgs2 between a gate and the source of the second device is between 0 V and VCC. In this process, the gate-source voltage Vgs2 of the second device rises gradually from 0 V until it equals the source-drain voltage Vds2 of the second device, in a step size of 0.01 V to 0.1 V. In other optional examples, the gate-source voltage Vgs2 may rise gradually in a step size of 0.01 V, 0.05 V or 0.1 V.

It is to be noted that the gate-source voltage Vgs2 of the second device has the same value range as the gate-source voltage Vgs1 of the first device, and that the step size in which the gate-source voltage Vgs2 of the second device gradually rises is the same as the step size in which the gate-source voltage Vgs1 of the first device gradually rises.

Block S30 may specifically include the following steps.

In S301: a curve showing the change of the first current ratio Isub1/Ids1 over the gate-source voltage Vgs1 of the first device is acquired.

In S302: a curve showing the change of the second current ratio Isub2/Ids2 over the gate-source voltage Vgs2 of the second device is acquired.

In S303: the influence of the process parameter adjustment or the device parameter adjustment on the HCI effect of the device is determined based on the curve showing the change of the first current ratio Isub1/Ids1 over the gate-source voltage Vgs1 and the curve showing the change of the second current ratio Isub2/Ids2 over the gate-source voltage Vgs2.

Specifically, the curve showing the change of the first current ratio Isub1/Ids1 over the gate-source voltage Vgs1 of the first device is plotted by taking the first current ratio Isub1/Ids1 as the y axis and the gate-source voltage Vgs1 of the first device as the x axis; then, in the same coordinate system, the curve showing the change of the second current ratio Isub2/Ids2 over the gate-source voltage Vgs2 of the second device is plotted by taking the second current ratio Isub2/Ids2 as the y axis and the gate-source voltage Vgs2 of the second device as the x axis. More specifically, as an example, where both the drain voltage Vd1 of the first device and the drain voltage Vd2 of the second device are 2.6 V, the curve 1) showing the change of the first current ratio Isub1/Ids1 over the gate-source voltage Vgs1 of the first device and the curve 2) showing the change of the second current ratio Isub2/Ids2 over the gate-source voltage Vgs2 of the second device are shown in FIG. 2.
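The measurement flow of blocks S10/S20 and the curve acquisition of S301/S302 can be sketched minimally as follows (in Python; measure_currents stands in for the instrument and is an assumption, as are all names):

    def current_ratio_curve(measure_currents, vds, step=0.05):
        """Sweep Vgs from 0 V up to Vds in the given step size and return
        a list of (Vgs, Isub/Ids) points, following blocks S10/S20 above.

        measure_currents(vgs) is assumed to return (isub, ids), measured
        with the source and substrate grounded and the drain held at Vds
        (not higher than 3 V); the measured drain current is assumed
        nonzero over the sweep.
        """
        curve = []
        n_steps = int(round(vds / step))
        for i in range(n_steps + 1):
            vgs = i * step
            isub, ids = measure_currents(vgs)
            curve.append((vgs, isub / ids))
        return curve

Taken with the same step size over the same Vgs range, the two curves can then be compared point by point at each gate-source voltage, as described next.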
It is to be noted that, in FIG. 2, both the first current ratio Isub1/Ids1 and the second current ratio Isub2/Ids2 are represented as a current ratio Isub/Ids, and both the gate-source voltage Vgs1 of the first device and the gate-source voltage Vgs2 of the second device are represented as a gate-source voltage Vgs.

The determination in S30 is as follows: under a same gate-source voltage Vgs, if the second current ratio Isub2/Ids2 is higher than the first current ratio Isub1/Ids1, it is determined that the process parameter adjustment or the device parameter adjustment made on the second device relative to the first device enhances the HCI effect of the device; and if the second current ratio Isub2/Ids2 is lower than the first current ratio Isub1/Ids1, it is determined that the process parameter adjustment or the device parameter adjustment made on the second device relative to the first device reduces the HCI effect of the device.

The technical features of the above examples may be combined freely. For brevity, not all possible combinations of the technical features of the examples are described; however, such combinations should be considered within the scope of the specification as long as there is no conflict.

In the method for evaluating the HCI effect, the influence of an adjusted process parameter on the HCI effect of the device is inferred by measuring the substrate current and the drain current of the device at different process parameters, thereby determining whether the HCI effect tends to become better or worse after changing the process parameter. The method is therefore able to quickly determine how an adjustment affects the HCI effect, without needing to know the specific service life, which greatly shortens the time for evaluating the HCI effect and accelerates the research and development process.

The above examples only describe several implementation modes of the disclosure. The description is specific and detailed, but cannot accordingly be understood as a limit to the scope of the disclosure. It should be pointed out that multiple changes and improvements may further be made by a person skilled in the art without departing from the concept of the disclosure, and they also belong to the protection scope of the disclosure. Therefore, the protection scope of the disclosure shall be subject to the appended claims.
9,529
11860221
DETAILED DESCRIPTION OF THE INVENTION FIGS. 1 and 2 of the accompanying drawings illustrate an apparatus 10, which is particularly suitable for full-wafer testing of microelectronic circuits of unsingulated wafers and/or burn-in testing of unsingulated wafers and/or built-in self-testing of unsingulated wafers.

The apparatus 10 includes a frame 12 and a number of modules mounted to the frame 12, including a wafer loader 14, a probing subassembly 16, a cartridge 18, a test head 20, and a thermal system 24. The frame 12 has a prober base portion 26, a thermal system frame portion 28, and a test head frame portion 30. The thermal system frame portion 28 is pivotally mounted to the prober base portion 26. The test head frame portion 30 is pivotally mounted to the thermal system frame portion 28. The probing subassembly 16 and the cartridge 18 are mounted to lower and upper portions 32 and 34, respectively, of the prober base portion 26, while the test head 20 and the thermal system 24 are mounted to the test head frame portion 30 and the thermal system frame portion 28, respectively.

The thermal system frame portion 28 can, for example, be pivoted between a position as shown in FIG. 1, wherein the thermal system frame portion 28 is over the prober base portion 26, and a position as shown in FIG. 2, wherein the pivot arm portion is pivoted approximately 45 degrees counterclockwise to the left. Pivoting the thermal system frame portion 28 into the position shown in FIG. 2 moves the test head 20 away from the cartridge 18. Access is thereby gained to the cartridge 18 for purposes of maintenance to or replacement of the cartridge 18.

As illustrated in FIG. 3, the cartridge 18 includes a cartridge frame 38, alignment pins 40 for aligning and locking the cartridge frame 38 in a fixed position, a contactor assembly 42, a plurality of first connector sets 44, and a plurality of flexible attachments 46 connecting the contactor assembly 42 to the first connector sets 44.

As shown in FIG. 4, the contactor assembly 42 includes a distribution board 48, a contactor board 50 and fasteners 52 that secure the contactor board 50 to the distribution board 48. The distribution board 48 has a force distribution substrate 55, a thermal expansion equalization substrate 57, an electrical distribution substrate 54, a plurality of terminals 56 formed on the electrical distribution substrate 54, a plurality of contacts 58 formed on the electrical distribution substrate 54, and a plurality of conductors 60 carried within the electrical distribution substrate 54. The terminals 56 and the contacts 58 are formed on the same side but on different areas of the electrical distribution substrate 54. Each conductor 60 interconnects a respective one of the terminals 56 with a respective one of the contacts 58.

The contactor board 50 includes a contactor substrate 62 having first and second pieces 64 and 66, a collar 67, and a plurality of pins 68. One end of each pin 68 is inserted through an opening in the first piece 64, and then inserted through an opening in the second piece 66. Each pin 68 has a central body that is larger than its ends so that it is held in place by the opening in the second piece 66. The collar 67 is used to align the first and second pieces 64 and 66 relative to one another. One end of each pin 68 forms a contact 70 that is placed against a respective terminal 56 of the distribution board 48. An opposing end of each pin 68 forms a terminal 72 that can touch a contact 74 on a wafer 76.
The fasteners 52 may, for example, be bolts, each having a shank that is inserted through an opening in the contactor substrate 62, with the thread on the shank then screwed into a threaded opening in the electrical distribution substrate 54. The electrical distribution substrate 54, the contactor substrate 62, the force distribution substrate 55, the expansion equalization substrate 57, and the fasteners 52 jointly form a support structure 80, with the terminals 72 extending from the support structure 80. The pins 68, terminals 56, conductors 60, and contacts 58 form conductive links to and from the terminals 72.

Each one of the flexible attachments 46 has a flexible nonconductive outer layer 82, a plurality of conductors 84 held within the outer layer 82 and separated from one another by the material of the outer layer 82, a plurality of open terminals 86 at ends of the respective conductors 84, and a plurality of electrically conductive bumps 88, each on a respective one of the terminals 86. Each one of the conductive bumps 88 is placed against a respective one of the contacts 58 of the distribution board 48. A clamp piece 90 is placed over an end of the flexible attachment 46. Fasteners 91 are used to secure the clamp piece 90 to the electrical distribution substrate 54 and provide a force that clamps the end of the flexible attachment 46 between the clamp piece 90 and the electrical distribution substrate 54.

As further shown in FIG. 5, the contacts 58 form an interface 92. The interface 92 has two parallel rows of the contacts 58. Two of the contacts, 58g, are ground contacts that extend from one of the rows to the other and are located at opposing ends of the rows. Threaded openings 94 are formed at opposing ends of the interface 92 into the electrical distribution substrate 54. Each one of the fasteners 91 in FIG. 4 has a respective head and a respective threaded shank extending from the head. The head rests on the clamp piece 90 and the shank is screwed into one of the threaded openings 94 in FIG. 5. A compliant member 93 is located between the clamp piece 90 and the flexible nonconductive outer layer 82 to distribute the force created by the clamp piece 90 and ensure uniform contact by the electrically conductive bumps 88.

Referring to FIG. 6, the electrical distribution substrate 54 is square and has a periphery formed by four sides 98. The contactor substrate 62 has a circular periphery 100 within the four sides 98. A plurality of interfaces 92, such as the interface 92 of FIG. 5, are provided on an area of the electrical distribution substrate 54 outside the circular periphery 100. The locations and orientations of the interfaces 92 are selected to provide a relatively dense configuration. The combined length of all the interfaces 92 is more than the length of the circular periphery 100. The combined length of the interfaces 92 is also more than the combined length of the sides 98.

The interfaces 92 in each respective quarter 102, 104, 106 and 108 are all aligned in the same direction. The interfaces 92 of the juxtaposed quarters 102 and 106 are each at an angle 110 of 45 degrees relative to a centerline 112 through the distribution substrate 94. The interfaces of the juxtaposed quarters 104 and 108 are each at an angle 114 of 135 degrees relative to the centerline 112, as measured in the same direction as the angle 110. Each one of the quarters 102, 104, 106 or 108 has ten of the interfaces 92A to 92J. The interfaces 92C, 92D, and 92E are parallel to one another but at different distances from a center point 116 of the contactor substrate 62. The interfaces 92F, 92G, and 92H are parallel to one another but at different distances from the center point 116.
The interfaces 92C and 92F are in line with one another, as are the interfaces 92D and 92G and the interfaces 92E and 92H. The interfaces 92B and 92I are in line with one another but form a row that is closer to the center point 116 than the row formed by the interfaces 92C and 92F. The interfaces 92B and 92I are also spaced further from one another than the interfaces 92C and 92F. The interfaces 92A and 92J also form a row that is closer to the center point 116 than the row formed by the interfaces 92B and 92I. Each one of the quarters 102, 104, 106, and 108 has an arrangement of ten of the interfaces 92 that is similar to the arrangement of interfaces 92A to 92J. The arrangement is rotated through 90 degrees about the center point 116 when moving from the quarter 108 to the quarter 102. Similarly, the arrangement is rotated through another 90 degrees when moving from the quarter 102 to the quarter 104, and so on. A respective flexible attachment 46 is connected to each respective one of the interfaces 92. The arrangement of the interfaces 92 allows for “fanning-in” or “fanning-out” of a large number of electrical paths to or from a relatively dense arrangement of the terminals 72 of the contactor board 50.

Referring again to FIG. 3, the cartridge frame 38 includes a lower backing plate 120, upper support pieces 122, and connecting pieces 124 that mount the upper support pieces 122 to the backing plate 120. The cartridge 18 further includes an actuator mechanism 126 for moving the contactor assembly 42 relative to the cartridge frame 38, and a travel sensor 128.

FIG. 7 illustrates the actuator mechanism 126, travel sensor 128, and a wafer holder 130 holding a wafer 76. A cylinder 132 is manufactured in the backing plate 120. The cylinder 132 has an outer surface 134 and an upper surface 138. A ring-shaped sliding piston 140 is inserted into the cylinder 132. A lower surface of the piston 140 is attached to the support structure 80. A fixed ring-shaped piston 136 is inserted into the center of the piston 140. An upper surface of the fixed ring-shaped piston 136 is attached to the backing plate 120. The support structure 80 is thus connected through the piston 140, fixed ring-shaped piston 136, and cylinder 132 of the actuator mechanism 126 to the backing plate 120. By locating the actuator mechanism 126 between the backing plate 120 and the support structure 80, the actuator mechanism 126 can move the contactor assembly 42 relative to the backing plate 120.

A fluid passage 142 is manufactured in the backing plate 120. The fluid passage 142 extends from an external surface of the backing plate 120 to a location above an upper surface of the piston 140. A fluid line 144 is connected to the fluid passage 142. Pressurized air or a vacuum pressure can be provided through the fluid line 144 and fluid passage 142 to the upper surface of the piston 140.

The travel sensor 128 has an outer portion 146 attached to the support structure 80, and an inner portion 148 attached to the backing plate 120. Relative movement between the outer portion 146 and the inner portion 148 results in a change of inductance (or capacitance) between the outer portion 146 and the inner portion 148. The inductance (or capacitance) can be measured to provide an indication of how far the outer portion 146 travels with respect to the inner portion 148. The outer portion 146 fits within a circular opening in the backing plate, and additionally serves as a guide for movement of the contactor assembly 42 relative to the backing plate 120. The wafer holder 130 forms part of the probing subassembly 16 illustrated in FIGS. 1 and 2.
The wafer holder 130 is mounted to the prober base portion 26 of FIGS. 1 and 2 for movement in horizontal x- and y-directions and in a vertical z-direction. As illustrated in FIG. 8, the wafer holder 130, with the wafer 76 thereon, is moved in the x- and y-directions until the wafer 76 is directly below the contactor board 50. The wafer holder 130 is then moved vertically upward in the z-direction towards the contactor board 50. Each one of the terminals 72 is aligned with a respective one of the contacts on the wafer 76. The terminals 72, however, do not at this stage touch the contacts on the wafer 76.

As shown in FIG. 9, the actuator mechanism 126 is used to bring the terminals 72 into contact with the contacts on the wafer 76. Pressurized air is provided through the fluid line 144 and the fluid passage 142 into a volume defined by the surfaces 134 and 138 of the cylinder 132, an outer surface of the fixed ring-shaped piston 136, and an upper surface of the piston 140. The pressurized air acts on the upper surface of the piston 140 so that the piston 140 is moved downward relative to the backing plate 120. The piston 140 also moves the contactor assembly 42 downward until the terminals 72 come into contact with the contacts on the wafer 76. The terminals 72 are resiliently depressible against the spring forces of the pins of which they form part. The spring forces jointly serve to counteract the force created by the pressure on the piston 140.

FIG. 10 shows the force that is created by the piston 140. No force acts on the terminals in FIGS. 7 and 8. In FIG. 9, the force is increased from zero to a predetermined force. This predetermined force can be calculated by multiplying the pressure by the area of the upper surface of the piston 140, as illustrated in the sketch following this passage. The forces created by the terminals 72 are highly controllable because the pressure is highly controllable. The predetermined maximum force can easily be modified from one application to another. When the forces are applied by the terminals 72, electric signals, power, and ground are provided through the terminals 72 to and from the wafer 76. Integrated circuits on the wafer 76 are thereby tested. Once testing is completed, the pressure is relieved so that the forces exerted by the terminals 72 are again reduced to zero. A negative pressure is then applied, which moves the contactor assembly 42 away from the wafer 76 into the position shown in FIG. 8. The wafer 76 is then removed by the wafer holder 130 and replaced with another wafer on the wafer holder 130.

It will be appreciated that the order and speed of moving the wafer holder 130 relative to the contactor board 50 and of actuating the actuator mechanism 126 to bring the terminals 72 into contact with the contacts of the wafer 76 can be varied. Differing contact algorithms can be used to move the wafer holder 130 and actuate the actuator mechanism 126 to achieve optimal contact (e.g., good electrical contact, least pad damage, etc.) for different types of wafers. The travel sensor 128 allows the pressure on the piston 140 to be set so that the piston 140 is roughly in the middle of its stroke when it contacts the wafer 76.

Wafers having differing contactor technologies and/or numbers of contact points may be used with the apparatus 10. Different contact technologies often require a different force per pin to ensure wafer contact, and may also have different contactor heights. A different total force may be required to be applied to the contactor to make good contact with the wafer 76.
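As referenced with respect to FIG. 10, the force relation is simply F = P x A; the following short sketch (in Python, with purely hypothetical numbers not taken from the embodiment) illustrates it:

    def predetermined_force(pressure_pa, piston_area_m2):
        """Force applied by the piston, per the relation above: F = P * A."""
        return pressure_pa * piston_area_m2

    # Hypothetical example: 200 kPa acting on a 0.01 m^2 piston face yields
    # 2000 N in total; the per-pin force is the total divided by the number
    # of pins in contact, which is why pin count and contact technology
    # determine the pressure setting.
    total_force = predetermined_force(200e3, 0.01)   # 2000.0 N
    per_pin_force = total_force / 20000              # 0.1 N per pin for 20,000 pins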
The travel sensor 128 can be used to measure the distance by which the piston 140 has extended the contactor towards the wafer 76 under test. Thus, wafers having these varying types of contactors can be tested using the same apparatus 10.

FIG. 11 illustrates an alignment and locking mechanism 152 mounted to the upper portion 34 of the frame 12 in FIGS. 1 and 2, and one of the alignment pins 40 mounted to the cartridge frame 38. The alignment and locking mechanism 152 includes an outer sleeve 154, an alignment piece 156, a piston 158, a fluid line 160, and a locking actuator 162. The alignment piece 156 has an alignment opening 164 formed therein. The alignment opening 164 has a conical shape so that an upper horizontal cross-section thereof is larger than a lower cross-section thereof. The alignment piece 156 is mounted to an upper end of the outer sleeve 154 and extends downwardly into the outer sleeve 154. The piston 158 is located within a lower portion of the outer sleeve 154 and can slide up and down within the outer sleeve 154. A cavity 166 is defined within the outer sleeve 154 and by a lower surface of the piston 158. The fluid line 160 is connected to the cavity 166. Positive and negative pressure can be provided through the fluid line 160 to the cavity 166. Positive pressure causes upward movement of the piston 158, and negative pressure causes the piston 158 to move down.

The locking actuator 162 has a plurality of spherical locking members 168 and a locking actuator 170. The locking actuator 170 is mounted to the piston 158 so that it can move vertically up and down together with the piston 158. The locking actuator 170 has an internal surface 172 that makes contact with the spherical locking members 168. The surface 172 is conical so that movement of the locking actuator 170 between raised and lowered positions causes corresponding movement of the spherical locking members 168 toward and away from one another.

The formation 40 includes a positioning pin 174 with a recessed formation 176 formed at a location distant from an end of the positioning pin 174. The cartridge frame 38 is moved so that the positioning pin 174 is roughly located over the alignment opening 164. When the cartridge frame 38 is lowered into the position shown in FIG. 11, an end of the slightly misaligned positioning pin 174 can slide on a surface of the alignment opening 164 so that a center line of the positioning pin 174 moves towards a center line of the alignment opening 164. The piston 158 and the locking actuator 162 are in a lowered position to allow for movement of a larger end of the positioning pin 174 through an opening defined by the spherical locking members 168.

FIG. 12 illustrates the components of FIG. 11 after the formation 40 is lowered all the way and engaged with the alignment and locking mechanism 152. A conical surface on the formation 40 contacts the conical surface of the alignment opening 164, thereby further promoting correct alignment of the center lines of the positioning pin 174 and the alignment opening 164. The recessed formation 176 on the positioning pin 174 is now at the same elevation as the spherical locking members 168. The piston 158 and the locking actuator 170 are elevated so that the spherical locking members 168 engage with the recessed formation 176. The positioning pin 174 is thereby engaged with the spherical locking members 168 of the alignment and locking mechanism 152.
The positioning pin 174 can be released from the alignment and locking mechanism 152 by first lowering the piston 158 so that the spherical locking members 168 disengage from the recessed formation 176, and then lifting the cartridge frame 38 together with the positioning pin 174 out of the alignment opening 164. It may from time to time be required that a cartridge 18 be temporarily removed for purposes of maintenance or reconfiguration, or be replaced with another cartridge. The formation 40 and the alignment and locking mechanism 152 allow for quick removal and replacement of cartridges.

FIG. 3 illustrates one of the alignment pins 40 and a piece of another. Only a piece of the cartridge 18 is illustrated in FIG. 3; the entire cartridge is in fact symmetrical about the section through one of the alignment pins 40. The other piece of the sectioned formation 40 and another one of the formations are not shown. There are thus a total of three of the alignment pins 40, respectively at the corners of a triangle. Each one of the alignment pins 40 engages with a corresponding alignment and locking mechanism 152. The three alignment and locking mechanisms 152 are all simultaneously and remotely actuable from a common pressure source connected to the corresponding fluid lines 160, to cause simultaneous engagement or disengagement of all three locking alignment pins 40.

As previously mentioned with reference to FIGS. 1 and 2, the test head 20 can be moved to the position shown in FIG. 2 for purposes of maintenance to the cartridge 18. The cartridge 18 can also be replaced as discussed with reference to FIGS. 11 and 12. Following maintenance and/or replacement of the cartridge 18, the test head 20 is pivoted onto the cartridge into the position shown in FIG. 1.

FIG. 13 illustrates portions of the test head 20 and cartridge 18 after the test head 20 is moved down onto the cartridge 18, i.e., from the position shown in FIG. 2 into the position shown in FIG. 1. The test head 20 has a second connector set 180 and an engager 182 mounted to the test head frame portion 30 of the frame 12 of FIG. 1. The second connector set 180 is initially disengaged from one of the first connector sets 44 of the cartridge 18.

The first connector set 44 includes a connector block support piece 184, a first connector module 186, and a first engagement component 188. The first connector module 186 includes a first connector block 190 and a plurality of septa 192. The septa 192 are held in a side-by-side relationship by the first connector block 190. FIG. 14 illustrates one of the septa 192 in more detail. A plurality of conductors is formed, one behind another into the page, against each septum 192. Each conductor includes a terminal 196 at a lower edge of the septum 192, a contact 198 at an upper edge of the septum 192, and an electrically conductive lead 200 interconnecting the terminal 196 with the contact 198.

Referring again to FIG. 13, a number of the flexible attachments 46 are attached through respective connectors 202 to the terminals 196 of FIG. 14. The septa 192 provide for a dense arrangement of the terminals 196 and contacts 198 held by the first connector block 190. The first connector module 186 is inserted into the connector block support piece 184 with the first connector block 190 contacting an inner portion of the connector block support piece 184. The first connector module 186 is then secured to the connector block support piece 184 by releasable means, so as to allow for later removal of the first connector module 186 from the connector block support piece 184. The first engagement component 188 has inner and outer portions 204 and 206, respectively.
The inner portion 204 is mounted to an outer portion of the connector block support piece 184 for pivotal movement about a horizontal axis 208. A spring 210 biases the first engagement component 188 in a counterclockwise direction 212. The outer portion 206 has a spherical inner engagement surface 214 and a groove 216 formed into the engagement surface 214.

A slider pin 218 is secured to and extends vertically upwardly from one of the upper support pieces 122 of the cartridge frame 38. A complementary slider opening 220 is formed vertically through the connector block support piece 184. The slider opening 220 is positioned over the slider pin 218, and the first connector set 44 is moved down until the connector block support piece 184 rests on the upper support piece 122. The first connector set 44 is thereby held by the slider pin 218 of the cartridge frame 38 and prevented from movement in the horizontal x- and y-directions. The first connector set 44 can still be removed from the cartridge frame 38, for purposes of maintenance or reconfiguration, by lifting the first connector set 44 off the slider pin 218.

The second connector set 180 includes a subframe 222, a second connector module 224, a cylinder 226, a piston 228, a rod 230, a spherical engager 232, a connecting piece 234, and first and second supply lines 236 and 238, respectively. The subframe 222 is mounted to the test head frame portion 30. The second connector set 180 is mounted through the subframe 222 to the test head frame portion 30. The second connector set 180 has a second connector block 240 and a plurality of printed circuit boards 242 mounted in a side-by-side relationship to the second connector block 240. Each one of the printed circuit boards 242 has a respective substrate, terminals on a lower edge of the substrate, contacts at an upper edge of the substrate, and electrically conductive traces, each connecting a respective terminal with a respective contact. The second connector block 240 is releasably held within the subframe 222 and secured to the subframe 222 with releasable means.

The cylinder 226 is secured to the subframe 222. The piston 228 is located within the cylinder 226 and is movable vertically upward and downward within the cylinder 226. First and second cavities are defined within the cylinder 226 above and below the piston 228, respectively, and the first and second supply lines 236 and 238 are connected to the first and second cavities, respectively. An upper end of the rod 230 is secured to the piston 228. The rod 230 extends downwardly from the piston 228 through an opening in a base of the cylinder 226. The spherical engager 232 is secured via the connecting piece 234 to a lower end of the rod 230. The connecting piece 234 has a smaller diameter than either the rod 230 or the spherical engager 232.

The engager 182 includes a plate 246 that is mounted to the subframe 222 for pivotal movement about a horizontal axis 248, an actuator assembly 201, and a link mechanism 252 connecting the plate 246 to the actuator assembly 201. The actuator assembly 201 includes an actuator 250, a connecting rod 253, an actuator pivot 251, and a rod pivot 255.

As previously mentioned, the second connector set 180 is initially disengaged from the first connector set 44. The second connector module 224 is thus disengaged from the first connector module 186, and the spherical engager 232 is also disengaged from the first engagement component 188. Pressurized air is provided through the first supply line 236 while air is vented from the second supply line 238, so that the piston 228 moves in a downward direction within the cylinder 226.
Downward movement of the piston 228 extends the rod 230 further out of the cylinder 226 and moves the spherical engager 232 closer to the cartridge 18. As illustrated in FIG. 15, the actuator assembly 201 is operated so that the link mechanism 252 moves the plate 246 in a counterclockwise direction 254. The plate 246 comes into contact with an outer surface 256 of the first engagement component 188. Further movement of the plate 246 rotates the first engagement component 188 in a clockwise direction 258 in a camming action. A fork defined by the groove 216 moves over the connecting piece 234, and the engagement surface 214 moves into a position over the spherical engager 232.

As illustrated in FIG. 16, pressurized air is provided through the second supply line 238, and air is vented through the first supply line 236, so that the piston 228 moves in a vertically upward direction. The rod 230 retracts upward into the cylinder 226. An upper surface of the spherical engager 232 engages with the engagement surface 214 and moves the first engagement component 188 towards the cylinder 226. The first connector set 44 lifts off the upper support piece 122 of the cartridge frame 38, and the connector block support piece 184 slides up the slider pin 218. The pressurized air provided through the second supply line 238 also creates a force that is sufficiently large to overcome the insertion force required to mate the first connector module 186 with the second connector module 224.

Each one of the septa 192 enters into a gap between two of the printed circuit boards 242. The gaps between the contacts 198 on the septa 192 and the gaps between the printed circuit boards 242 are sufficiently small that an interference fit is required to insert the septa 192 between the printed circuit boards 242. Once the insertion force is overcome and the septa 192 are located between the printed circuit boards 242, each one of the contacts 198 is located against a corresponding terminal on a lower edge of one of the printed circuit boards 242. The pressurized air provided through the second supply line 238 can be removed after the first and second connector modules 186 and 224 are mated.

The first and second connector modules 186 and 224 can be disengaged from one another by providing pressurized air through the first supply line 236 so that the first connector set 44 moves into the position shown in FIG. 15. The actuator assembly 201 is then operated and the plate 246 moves into the position shown in FIG. 13. The spring 210 biases the first engagement component 188 in the counterclockwise direction 212 away from the spherical engager 232. The rod 230 is then typically retracted into the cylinder 226 again.

As illustrated in FIG. 17, the cartridge frame 38 has four of the upper support pieces 122, and a respective pair of the upper support pieces 122 carries a respective column of the first connector sets 44. The columns are located next to one another so that a respective pair of the first connector sets 44 is in a respective row. There can be a total of 16 rows in each of the two columns, thus potentially forming an array of 32 of the first connector sets 44. Each one of the first connector sets 44 is symmetrical on the left and the right. The connector block support piece 184 entirely surrounds the first connector module 186, and two slider openings (220 in FIG. 13) are provided at opposing ends of the connector block support piece 184. Slider pins 218 are provided on all four of the upper support pieces 122, and each respective connector block support piece 184 has two slider openings 220 respectively located over two of the slider pins 218.
As shown in FIG. 18, an array of second connector modules 224 is provided, matching the array of first connector modules 186 of FIG. 17. Two spherical engagers 232 are located on opposing sides of each one of the second connector modules 224. In use, a respective pair of spherical engagers 232 is used to engage one of the first connector modules 186 with one of the second connector modules 224 independently of the other connector modules. One of the first connector modules 186 is engaged with one of the second connector modules 224, whereafter another one of the first connector modules 186 is engaged with another one of the second connector modules 224, and so on. By staggering the engagement of a respective first connector module 186 with a respective second connector module 224, forces on the subframe 222 and other pieces of the frame 12 of FIG. 1 can be kept within their design parameters. Each one of the plates 246 is located adjacent a plurality of the spherical engagers 232. Movement of a respective one of the plates 246 causes the respective plate 246 to contact and simultaneously pivot a plurality of the first engagement components 188 of FIG. 13 over a plurality of respective ones of the spherical engagers 232.

Referring to FIGS. 18 and 19 in combination, each one of the second connector modules 224 is mounted to respective pattern generator, driver, and power boards 260, 262, and 264, respectively, each residing in a respective slot of a base structure 266. As specifically shown in FIG. 19, access can be gained to the boards 260, 262, and 264 by rotating the thermal system frame portion 28 together with the test head frame portion 30 an additional 135 degrees counterclockwise to the left when compared to FIG. 2, and then rotating the test head frame portion 30 relative to the thermal system frame portion 28 by 90 degrees clockwise to the right. The thermal system 24 is then positioned on the ground and the test head 20 in a vertical orientation. The boards 260, 262, and 264 are all accessible from the left within the test head 20 because the test head 20 and the thermal system 24 have been separated from one another.

The boards 260, 262, and 264 that reside in the slots of the base structure 266 are then removable and replaceable, and other boards can be added, for purposes of reconfiguration. Each one of the slots can only carry one particular type of board 260, 262, or 264. The base structure 266 is configurable so that the slots allow for more or fewer of a particular board, or so that the locations of particular boards can be modified. Once the slots are inserted, they are typically not replaced over the life of the apparatus 10. The number of boards 260, 262, and 264 that are used can still be configured from one application to the next.

FIG. 20 illustrates an example of a layout of slots in the test head 20. The particular layout of slots of FIG. 20 allows for the use of two pattern generator boards 260, one on the left and one on the right; six driver boards 262, three on the left and three on the right; and 24 power boards 264, twelve on the left and twelve on the right.

After the boards 260, 262, and 264 are inserted into the slots as discussed with reference to FIGS. 19 and 20, the apparatus is first moved into the configuration illustrated in FIG. 2, with the thermal system 24 above the test head 20, and then into the configuration illustrated in FIG. 1, with the components of the test head 20 electrically connected to the components of the cartridge 18 in FIG. 2. Referring specifically to FIG. 1, it should be noted that the thermal system 24 does not rest on the test head 20.
Any vibrations caused by components of the thermal system 24 thus cannot be directly transferred to the test head 20. The test head 20 and the thermal system 24 are held in the relative orientation shown in FIG. 1, with the thermal system 24 above the test head 20, by the thermal system frame portion 28 and the test head frame portion 30, respectively, of the frame 12. The frame 12 is relatively heavy and has a rigid construction, and effectively dampens any vibrations created by components of the thermal system 24. The vibrations substantially do not reach the components of the test head 20.

FIG. 21 illustrates how the thermal system 24 cools components of the test head 20. FIG. 21 is a partial cross-sectional view parallel to a plane of one of the boards 260, 262, and 264 of FIG. 20, and shows one of the driver boards 262 and one of the power boards 264 inserted into their respective slots of the base structure 266 of the test head 20. The test head 20 further has two manifold panels 268 mounted on opposing sides and at upper portions of the base structure 266. The base structure 266 has openings between the slots that allow air to flow from the manifold panels 268 inward to the boards 262 and 264, and then from the boards 262 and 264 to an upper end exhaust 270.

The thermal system 24 includes an outer shell 272, four recirculation fans 274 (only two of the recirculation fans 274 are shown in FIG. 21; the other two are located behind those shown), and two heat exchangers 276. The air leaving the upper end exhaust 270 is drawn through the recirculation fans 274 into the outer shell 272. The recirculation fans 274 then force the air through the heat exchangers 276, whereafter the air enters through upper end inlets 278 defined by the manifold panels 268. By recirculating the air, heat convects from the boards 262 and 264 to the heat exchangers 276. As is commonly known, each heat exchanger 276 includes a plurality of fins 280 and tubing 282 connecting the fins 280 to one another. A cooling fluid, such as liquid water, is pumped through the tubing 282. The heat convects to the fins 280, conducts from the fins 280 to the tubing 282, and then convects from the tubing 282 to the water, by which it is pumped away.

It should be noted that there is no physical contact between any components of the thermal system 24 and any components of the test head 20. Only a small gap 284 is defined between the outer shell 272 and the manifold panel 268. A seal is typically located in the gap 284 and is made of a compliant material, so that any vibrations transferred by the fans 274 to the outer shell 272 do not transfer to the manifold panels 268. Guide panels 286 form part of the thermal system 24 and serve to prevent the air from entering the test head 20 before first passing through the fans 274 and the heat exchangers 276.

FIG. 22 illustrates software and hardware components of the apparatus 10 of FIG. 1 that cooperate and are matched to one another for fanning-out and fanning-in of electric signals, power, and ground. Zones are defined, wherein each zone includes one pattern generator board 260, one or more driver boards 262, and one or more power boards 264 connected to one another. Each board 260, 262, and 264 has a number of resources or channels. In particular, a driver board 262 has a number of input/output channels, and a power board 264 has a number of power channels.
The number of boards 260, 262, and 264 and the way that they are connected to one another are configurable, depending on the requirements of the integrated circuits of the devices 300 and the layout of the devices 300 on the wafer 76. An interconnection scheme 302 connects the driver and power boards 262 and 264 to contacts on the devices 300. The interconnection scheme 302 includes the electrical paths formed by conductors within the cartridge 18 of FIG. 3. The interconnection scheme 302 is also configurable, as will be appreciated from the foregoing description of the cartridge 18. The boards 260, 262, and 264 and the interconnection scheme 302 are hereinafter jointly referred to as a tester system 304. A local controller 306 is used to provide test instructions to the tester system 304 and is then used to upload and process test results from the tester system 304. The local controller 306 has memory and, stored in the memory, a test program 308, a configuration file 310, a test application 312, a test results file 314, a processing application 316, and a test report 318.

Reference should now be made to FIGS. 22 and 23 in combination. The test program 308 has a series of instructions written by a test programmer to test one of the devices 300 (step 400). The following is an extract of such a program:

    setdps ("v NORMAL 1", "Vcc", 3.0 V, 0.0 V, 11.0 V);
    setdps ("v NORMAL 1", "Vcd", 4 V, 0.0 V, 11.0 V);
    setsps ("v NORMAL 1", "Vio", 0 V, 3.3 V);
    setsps ("v NORMAL 1", "Vclk", 0 V, 3.3 V);
    setsps ("v NORMAL 1", "Vcs", 0 V, 3.3 V);
    setpps ("v NORMAL 1", "Term 1", 1.0);
    settps ("v NORMAL 1", "Term 2", 1.0);
    setthps ("v NORMAL 1", "CompH", 1.5);
    setthps ("v NORMAL 1", "CompL", 0.9);

The test application 312 utilizes the test program 308, data from the configuration file 310, and data from the test results file 314 to provide instructions to the boards 260, 262, and 264 (step 402). The boards 260, 262, and 264 then provide electric signals, power, or ground through respective conductors of the interconnection scheme 302 (step 404). The configuration file 310 has data representing a relationship between the channels of the boards 260, 262, and 264 and the contacts of the devices 300. The configuration file 310 will be different from one configuration assembly of the tester system 304 to another. The configuration file 310 thus represents how the instructions of the test program 308 are fanned out through the tester system 304 to the devices 300. Each device 300 is tested with the same test program 308 (step 406), although the voltage and signal levels may be modified based upon the test results file 314.
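For illustration, the fan-out of steps 402 and 404 can be viewed as a lookup from a pad label to every board channel serving that pad. The following is a minimal sketch under assumed types and names (Channel, FanOutMap, set_supply); the patent does not define a programming interface for this:

    #include <map>
    #include <string>
    #include <vector>

    struct Channel { int slot; int channel; };                      // resource on a board
    using FanOutMap = std::map<std::string, std::vector<Channel>>;  // PAD LABEL -> channels

    // Fan one test-program instruction out to every channel that the
    // configuration data associates with the named pad, and return the
    // per-channel commands that would be sent to the boards.
    std::vector<std::string> set_supply(const FanOutMap& config,
                                        const std::string& pad_label, double volts) {
        std::vector<std::string> commands;
        for (const Channel& ch : config.at(pad_label)) {
            commands.push_back("slot " + std::to_string(ch.slot) +
                               " ch " + std::to_string(ch.channel) +
                               " <- " + std::to_string(volts) + " V");
        }
        return commands;
    }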
The following is an extract of the configuration file 310. In the original, the extract is a table of thirteen columns with the field names listed at the top of each column: ZONE NUMBER, SLOT NUMBER, CHANNEL TYPE, RAB NUMBER, PWR MODULE NUMBER, CHANNEL NUMBER, COLUMN, ROW, CONN TYPE, PAD LABEL, TERM LABEL, COMMON KEY, and MASK. (The row data of the table does not survive reproduction here; CHANNEL TYPE values visible in the extract include HVOL, DRV_CS, DRV_UCLK, and DRV_IO, and PAD LABEL values visible in the extract include CS_0, CS_1, DQ1A_0, DQ1B_1, and DQ7I/O_1.)

The fields at the top of the columns of the table stand for the following:

ZONE NUMBER: index indicating membership in a pattern zone, determined by the pattern generator board 260.
SLOT NUMBER: location of a driver or power board 262 or 264.
CHANNEL TYPE: type of hardware resource to be used.
RAB NUMBER: index of the reference and acquisition module on the power board 264, or -1 if not applicable.
PWR MODULE NUMBER: power module on the power board 264.
CHANNEL NUMBER: resource index of the given board 262 or 264.
COLUMN, ROW: position of the device 300 on the wafer (or test board).
CONN TYPE: connection type; D for device, or T for termination; whether a resource influences a device directly or provides auxiliary electrical characteristics to the test assembly.
PAD LABEL: designator for the terminal 72 or pin 68 that the resource is connected to; this label is then used for programming purposes.
TERM LABEL: optional label for a termination pin.
COMMON KEY: optional sort key.
MASK: field determining whether a device should be tested or not.

Some resources are provided separately to each of the devices 300. For example, there may be a total of 600 of the devices 300, and each device may require a separate input/output line connected through the interconnection scheme 302. Other resources may be shared in order to reduce the number of electrical paths that are provided through the interconnection scheme 302. For example, a single input/output line 320 can be provided through the interconnection scheme 302 and, at the last level within the interconnection scheme 302, be fanned out to a set (or all) of the devices 300. An input/output signal is thus provided to all the devices 300 of the set. A chip select line 322 can be accessed to select a subset of the devices of the set to which the input/output line 320 is connected. Unique chip select line combinations are then grouped into chip select states.

FIGS. 24A and 24B illustrate the data structure of the configuration file 310 ("cartconf"). The configuration file 310 includes both a wafer requirement data structure (wafer_reqs) and a shared resources map (cs_map) representing the chip select states. Descriptions of the respective fields and what they represent are provided in FIGS. 24A and 24B.
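For illustration, one row of the configuration file 310 can be modeled as a record over the thirteen fields just defined. The struct below is a sketch only; the concrete field types are assumptions, since the patent specifies the semantics of the fields but not a schema:

    #include <string>

    struct ConfigRecord {
        int zone_number;          // pattern zone, determined by pattern generator board
        int slot_number;          // location of a driver or power board
        std::string channel_type; // hardware resource type, e.g. "DRV_CS", "HVOL"
        int rab_number;           // reference/acquisition module index, -1 if n/a
        int pwr_module_number;    // power module on the power board
        int channel_number;       // resource index on the given board
        int column, row;          // device position on the wafer (or test board)
        char conn_type;           // 'D' device, 'T' termination
        std::string pad_label;    // terminal/pin designator, used for programming
        std::string term_label;   // optional termination-pin label
        std::string common_key;   // optional sort key
        bool mask;                // whether the device should be tested
    };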
Again referring to FIGS. 22 and 23, a response from each one of the devices 300 is provided through the interconnection scheme 302 and stored in memory of the driver and power boards 262 and 264 (step 408). The system software uploads the responses from the driver and power boards 262 and 264 into the test results file 314 (step 410). The test results file 314 has raw data wherein the test results of all the devices 300 are collated. The test results file 314 is provided to the processing application 316. The processing application 316 utilizes the configuration file 310 to interpret the test results file 314 in such a manner that the test results of individual ones of the devices 300 are extracted from the test results file 314 (step 412). The processing application 316 then publishes the test report 318 (step 414). The test report 318 is typically a two-dimensional map on a computer screen with cells representing the devices 300, wherein functioning and defective devices are shown in different colors. The test results file 314 is also used by the test application 312 to modify the instructions provided to the boards 260, 262, and 264.

FIG. 25 illustrates a software assembly application 420 that is used for constructing the configuration file 310 of FIG. 22. The application 420 includes a plurality of net files 422, an input module 424, and an assembly module 426. The net files 422 each represent a scheme of current passing through conductors of a respective electrical subassembly. For example, the net file 422A is a pattern generator board net file representing the flow of current through one of the pattern generator boards 260 of FIG. 19. Similarly, the driver board net file 422B and the power board net file 422C respectively represent the flow of current through conductors of one of the driver boards 262 and one of the power boards 264. The interconnection scheme 302 also has multiple components, and a respective net file 422D or 422E represents the flow of current through a respective component of the interconnection scheme 302.

Referring now to FIGS. 25 and 26 in combination, the net files 422 are first stored in memory of a computer system on which the software assembly application 420 resides (step 450). The input module 424 has an interface with a list of the components that can make up the tester system 304. The list includes one pattern generator board, one driver board, one power board, and one of each type of component that can make up the interconnection scheme 302. The input module 424 also allows an operator to select how many of the components on the list are used to assemble the tester system 304, and how the components are connected to one another. For example, the operator can select two pattern generator boards and three driver boards, one of the driver boards being connected to one of the pattern generator boards and the other two driver boards being connected to the other pattern generator board (step 452). The assembly module 426 then uses the input provided by the operator via the input module 424 and the net files 422 to assemble the configuration file 310. In the given example, the assembly module 426 will construct the configuration file 310 so that it has data representing two pattern generator board net files 422A and three driver board net files 422B, with one driver board net file 422B being associated with one pattern generator board net file 422A and the other two driver board net files 422B being associated with the other pattern generator board net file 422A (step 454).
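A rough sketch of step 454 follows, under assumed data structures (NetFile, Selection, and assemble_configuration are illustrative, not from the patent): the operator's connection choices from step 452 are stitched together with the stored net files to produce configuration data:

    #include <string>
    #include <vector>

    struct NetFile { std::string component; /* conductor nets omitted */ };

    struct Selection {
        int pattern_generator_index;  // which pattern generator a driver feeds
        NetFile driver_net;
    };

    std::vector<std::string> assemble_configuration(
            const std::vector<NetFile>& pattern_generators,
            const std::vector<Selection>& drivers) {
        std::vector<std::string> config;
        for (const Selection& s : drivers) {
            // Associate one driver board net file with its selected pattern
            // generator board net file, as in the two-PG/three-driver example.
            config.push_back(s.driver_net.component + " -> " +
                             pattern_generators.at(s.pattern_generator_index).component);
        }
        return config;
    }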
The configuration file 310 can then be transferred from the computer system on which the software assembly application 420 resides to the local controller 306 of FIG. 22.

FIG. 27 illustrates some of the components hereinbefore described, together with some additional components of the apparatus 10. The components hereinbefore described include the cartridge 18 that has the contactor assembly 42, the flexible attachments 46, two of the power boards 264, one of the driver boards 262, one of the pattern generator boards 260, and the local controller 306. Two types of power boards 264V and 264C are used, for high voltage and high current, respectively. Each power board 264V or 264C has eight logical groups of 64 channels, and therefore 512 channels in total. The high-voltage power board 264V can provide a voltage output of 0.5 V to 12 V at a current of at least 200 mA for each channel. The high-current power board 264C can provide an output of 0.1 V to 5 V at a current of at least 500 mA. The locations of the boards 260, 262, and 264 have been described with reference to FIG. 20. Each one of the power boards 264V or 264C is connected to the contactor assembly 42 through four dedicated power flexible attachments 46P. The driver board 262 is connected to the contactor assembly 42 through dedicated signal flexible attachments 46S. The flexible attachments 46 have been described with reference to FIG. 3. The flexible attachments 46 connecting at site 92 at the distribution board 48 also provide alternating current (AC) ground from the contactor assembly 42 to the boards 262 and 264.

The apparatus 10 further includes a ground plate 460 and a bussed low-voltage differential signaling (LVDS) backplane 462 mounted within the test head 20. The power boards 264V and 264C and the driver board 262 each have two direct current (DC) connection pins 508, as illustrated in FIG. 18, that connect to the ground plate 460. The DC pins 508 also pass through the ground plate 460 and connect to the block support piece 184 shown in FIG. 17. DC ground cables 464 connect the block support piece 184 to the signal distributor board 48, shown in FIG. 4, at the DC connection site 461 illustrated in FIG. 6, and thereby provide a DC ground path between the boards 262 and 264, the contactor assembly 42, and the wafer 76. FIG. 3 illustrates connectors 466 to which the DC ground cables 464 are attached at the block support piece 184 of the cartridge 18. The boards 260, 262, 264C, and 264V each have a connection that connects the respective board to the bussed LVDS backplane 462. A logical link is thereby provided between the boards 260, 262, 264C, and 264V, allowing the boards to communicate with one another. It is also the bussed LVDS backplane 462 that provides the logical link between the boards 260, 262, and 264 illustrated in FIG. 22.

The apparatus 10 further has a system control bay 470 that includes a bulk die power supply 472V for high voltage, a bulk die power supply 472C for high current, the local controller 306 described with reference to FIG. 22, and a system controller 474. The bulk die power supply 472V can provide a voltage of 0.5 V to 13 V at 110 A, and the bulk die power supply 472C can provide a voltage of 0.5 V to 7 V at 200 A. The bulk die power supply 472V is connected through respective power cables 476 to the power board(s) 264V. Similarly, the bulk die power supply 472C is connected through respective power cables 476 to the power board(s) 264C. An Ethernet link 478 connects and networks the bulk die power supplies 472V and 472C, the local controller 306, the system controller 474, and the boards 260, 262, 264C, and 264V with one another.
The local controller 306 controls the boards 260, 262, 264C, and 264V and the system controller 474 through the Ethernet link 478, as well as peripheral components of the apparatus 10.

FIG. 28 illustrates one of the power boards 264V or 264C and its connections to the ground plate 460 and the power flexible attachments 46P. A board-level control and bulk power control 490 is connected to the Ethernet link 478. A board power control 492 and a calibration control 494 are connected to the board-level control and bulk power control 490. The board-level control and bulk power control 490, a device power timing system 500, and the calibration control 494 are connected to a reference and measurement system 496 and provide a series of instructions to the reference and measurement system 496. The instructions have been described with reference to FIG. 22 (the instructions that are provided by the board-level control and bulk power control 490, the device power timing system 500, and the calibration control 494 to the reference and measurement system 496 have, for purposes of explanation, been equated to chords in a music score). The pattern generator board 260 has a pattern generator power timing bus that is connected through the bussed LVDS backplane to the device power timing system 500. The device power timing system 500 is connected to the reference and measurement system 496. The device power timing system 500 provides both timing and instructions to the reference and measurement system 496 for purposes of carrying out the instructions that are provided from the board-level control and bulk power control 490 and the calibration control 494 (the functioning of the device power timing system 500 has, for purposes of explanation, been equated to an orchestra conductor who provides both the timing and the instructions as to which chords are to be played).

The reference and measurement system 496 includes eight logical systems of 64 channels each, thus totaling 512 channels. Inputs into the reference and measurement system 496 include signals from the pattern generator index bus, pattern generator clocks, a calibration reference, and ground sense. The reference and measurement system 496 performs voltage readback and current readback. Output from the reference and measurement system 496 includes four voltage references and device power control through a device power control bus. Output from the reference and measurement system 496 thus includes logic for purposes of controlling power.

The reference and measurement system 496 and the board-level control and bulk power control 490 are connected to a device power output system 502. A positive side of the bulk die power supply 472V or 472C is also connected to the device power output system 502 through a cable 476. The device power output system 502 regulates the power from the bulk die power supply 472V or 472C, utilizing the signal from the reference and measurement system 496 (the power provided by the bulk die power supply 472V or 472C has, for purposes of explanation, been equated to the power or air that is provided simultaneously to a number of musical instruments in an orchestra). The device power output system 502 includes 16 sections of 32 channels, grouped into eight logical groups, thus totaling 512 channels. Each channel includes a Kelvin sense system, each system including one force (+F) and one sense (+S) line, so that there are a total of 1,024 pins and circuits. Input into the device power output system 502 includes references, bulk power, control parameters from the board-level control and bulk power control 490, and device power control through the device power control bus. The device power output system 502 also provides voltage and current readback to the reference and measurement system 496 and channel status information to the board-level control and bulk power control 490.
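For illustration, the channel organization just described reduces to simple index arithmetic. The grouping figures are from the patent; the function and names are assumptions:

    struct ChannelAddress { int group; int section; int channel_in_section; };

    ChannelAddress locate(int flat_channel) {    // flat_channel in [0, 512)
        return ChannelAddress{
            flat_channel / 64,                   // eight logical groups of 64
            flat_channel / 32,                   // sixteen sections of 32
            flat_channel % 32                    // position within a section
        };
    }
    // With one force (+F) and one sense (+S) line per channel, 512 channels
    // account for the 1,024 pins and circuits noted above.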
Four of the power flexible attachments 46P are connected to the device power output system 502. Each power flexible attachment 46P includes 128 +F lines, 128 +S lines, AC ground, and ground sense. Two ground sense traces from each power flexible attachment 46P, thus totaling eight traces, are connected to a board ground control system 506. The board ground control system 506 averages the eight measurements from the ground sense traces and provides the averaged result as an output to the reference and measurement system 496 (a minimal sketch of this averaging follows at the end of this passage). A ground pin 508 is connected to the ground plate 460 and the first connector sets 44. The ground pin 508 is connected to both the device power output system 502 and a board power system 510. The board power system 510 has a separate 48 V input and can provide, for example, outputs of 15 V, 5 V, 3.3 V, -3.3 V, and 1.2 V. The DC ground cables 464 are connected to the block support piece 184. The negative side of the bulk die power supply 472V or 472C is also connected through the power cable 476 to the ground plate 460.

What should be noted is that separate paths are provided for AC ground and for DC ground. AC ground is provided through the flexible attachments 46P that also deliver the power. The physical spacing between the +F power conductors, the +S lines, and AC ground in a power flexible attachment 46P is extremely small, typically on the order of 0.002 to 0.010 inches. Such a small spacing allows for a substantial reduction in noise and an increase in speed, which is particularly important for accurate measurement through the 512 sense lines and clean power delivery through the +F lines. DC ground is provided through the DC ground cables 464. The AC and DC grounds have, for example, respective resistances of between 0.5 and 1.5 ohms and between 0.003 and 0.015 ohms.
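The averaging performed by the board ground control system 506 amounts to the arithmetic mean of the eight ground sense measurements. A minimal sketch, with signal acquisition abstracted to an array of voltages:

    #include <array>
    #include <numeric>

    // Two ground sense traces from each of the four power flexible
    // attachments 46P yield eight measurements; the averaged result is
    // what the board ground control system 506 reports to the reference
    // and measurement system 496.
    double averaged_ground_sense(const std::array<double, 8>& sense_volts) {
        return std::accumulate(sense_volts.begin(), sense_volts.end(), 0.0) / 8.0;
    }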
FIG. 29 illustrates components of the device power output system 502 in more detail. The device power output system 502 includes only a single one of a subsystem A. A subsystem B is replicated 512 times, in eight groups of 64, and the 512 subsystems B are connected in parallel to the subsystem A. A subsystem C is replicated eight times, and the eight subsystems C are connected in parallel to the subsystems B.

Subsystem A includes the bulk die power supply 472 and the power cables 476, which include an AC-to-DC conversion circuit comprising an inductor I and a capacitor C1 that connects an output terminal of the inductor I to ground, and is controlled by the board-level control and bulk power control 490 and the local controller 306 through the Ethernet link 478. An input terminal of the inductor I is connected to the bulk die power supply 472V or 472C in FIG. 27. A stepped voltage cycle is provided to the input terminal of the inductor I. The amplitude and the period of the stepped voltage cycle always remain constant, but the amount of time that the voltage is high during a particular period can be modulated. The total amount of time that the voltage is high can thus be modulated from a small percentage of the total time to a large percentage of the total time. The inductor I and the capacitor C1 convert the voltage steps to a DC voltage. The DC voltage can thus also be modulated, depending on the percentage of time that the voltage provided to the input terminal of the inductor I is high. The bulk die power supply 472V or 472C thus allows for a variable voltage to be created per power board 264, and the DC voltage can be modulated depending on the need to control power dissipation in the device power output system 502.

The reference and measurement system 496 allows for 16 different voltages to be created per group of 64 channels. Different voltages can be provided to different groups of 64 channels at a particular moment in time. The DC voltage created by the subsystem B is provided through a force line F+ and a power terminal 72P to a power contact 74P of a respective device 300 (see also reference numerals 72 and 74 in FIG. 4). A sense line S+ is connected to the power terminal 72 or 56 and detects a voltage at the power terminal 72. The voltage detected by the sense line S+ is provided through a resistor R2, an amplifier A3, and a resistor R1 to control a MOSFET 1 located in the force line F+. The amplifier A3 also receives at its positive terminal an input (Vref) through a switch 594. The amplifier A3 is set so that the voltages provided at its positive and negative terminals are combined to provide an output voltage to the MOSFET 1. The voltage Vrefout provides an input voltage, which is the desired voltage to be provided to the power terminal 72P, and the sense line S+ provides feedback through the amplifier A3 to keep the voltage provided to the MOSFET 1, and therefore to the power terminal 72P, at a steady state. The amplifier A3 provides a voltage (Vrefout + VGS), in this case 2.3 V, to the MOSFET 1 if the voltage provided by the subsystem A is 1.5 V and the power terminal 72P requires a voltage of 1 V.

The MOSFET 1 dissipates heat equivalent to the difference between the voltage provided by the subsystem A and the voltage on the force line F+, multiplied by the current. For example, the voltage provided by the subsystem A can be 1.5 V, and the force line F+ can provide a voltage of 1 V. If the current is 1 A, the power dissipated by the MOSFET 1 is 0.5 W. Should the voltage provided by the subsystem A always be a maximum of, for example, 12 V, the MOSFET 1 would have to dissipate 11 W. The variable power provided by the bulk die power supplies 472V and 472C in FIG. 27 thus substantially assists in reducing the amount of energy, and therefore heat, dissipated by the MOSFET 1. A resistor R3 is connected between the force and sense lines F+ and S+ and resistively connects the F+ line to the S+ input of the amplifier A3. The resistor R3 serves to control the amplifier A3 in case of a failure by holding the force and sense lines F+ and S+ to similar voltages. The resistor R3 is thus simply a safety device in case of contact failure.
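The dissipation trade-off just described reduces to P = (Vrail - Vforce) x I. A minimal sketch checking the figures given above; the function itself is illustrative:

    // MOSFET1 drops the difference between the subsystem A rail and the
    // force-line voltage, so it dissipates (rail_v - force_v) * current_a.
    double mosfet1_dissipation_w(double rail_v, double force_v, double current_a) {
        return (rail_v - force_v) * current_a;
    }

    // mosfet1_dissipation_w(1.5, 1.0, 1.0) == 0.5   (rail tracks the load)
    // mosfet1_dissipation_w(12.0, 1.0, 1.0) == 11.0 (fixed 12 V rail)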
The subsystem B also includes a circuit that automatically switches power to the device 300 off upon the detection of an overcurrent, among other things. The overcurrent detection and switching circuit includes a resistor R6 located after the MOSFET 1 in the force line F+. A voltage over the resistor R6 is linearly related to the current through the force line F+. An amplifier A1 amplifies the voltage detected over the resistor R6. A comparator A2 compares the output from the amplifier A1 to a current set point supplied by the reference and measurement system 496. The output from the comparator A2 is zero if the output from the amplifier A1 is the same as, or greater than, the current set point. The output from the comparator A2 thus provides an indication of an overcurrent or undercurrent through the resistor R6. The output from the comparator A2 is provided to a field programmable gate array (FPGA) 1. The FPGA 1 has logic that determines whether the overcurrent or undercurrent is sufficient to switch the subsystem B off. The FPGA 1 also provides for a timing delay before switching the current off, to allow for brief surges in current without switching the current off.

An output of the FPGA 1 is provided to a switch 1 and a switch 2 594. During normal operating conditions, i.e., when the current should continue to flow, the switch 1 is in its "off" position and the switch 2 is in its "A" position. A voltage of 15 V is provided through a resistor R5 to one terminal of the switch 1 and to a MOSFET 2 located after the resistor R6 in the force line F+. During normal operating conditions, the voltage provided through the resistor R5 maintains the MOSFET 2 in an "on" state, thereby allowing current to flow through the force line F+. Should an overcurrent be detected, the FPGA 1 switches the switch 1 to its "on" position, thereby grounding the voltage provided through the resistor R5; the MOSFET 2 then switches into its "off" state and disconnects the current, and the switch 2 is set to the "B" position, shutting down the amplifier A3.

What should be noted is that each one of the 512 subsystems B has its own overcurrent detection and switching circuit. The 512 overcurrent detection and switching circuits allow for the currents to one or more of 512 individual devices to be switched off while current to the other devices continues to flow. Current measurement and voltage measurement can also be done on a per-device level, because each one of the subsystems B has a respective current measurement line (Imeas) and a respective voltage measurement line (Vmeas). The current measurement line Imeas is connected to an output of the amplifier A1, and the voltage measurement line Vmeas is connected to the sense line S+. The current and voltage measurement lines Imeas and Vmeas allow for real-time measurement of the current and voltage provided to the power terminal 72P.
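A sketch of the trip behavior attributed to the FPGA 1: an overcurrent indication must persist before the current is switched off, so that brief surges are tolerated. Sample-based debouncing is an assumption here; the patent states only that a timing delay is provided:

    struct OvercurrentTrip {
        int over_samples = 0;   // consecutive samples indicating overcurrent
        int delay_samples;      // surges shorter than this are tolerated
        bool tripped = false;

        // comparator_over abstracts the A2 indication that the current through
        // R6 has reached the current set point (the actual output polarity of
        // A2 is described above).
        void sample(bool comparator_over) {
            over_samples = comparator_over ? over_samples + 1 : 0;
            if (over_samples >= delay_samples)
                tripped = true;  // switch 1 on, MOSFET2 off, switch 2 to "B"
        }
    };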
The subsystem B also includes a switching circuit having a resistor R4 and a MOSFET 3. The resistor R4 is connected to the force line F+ after the MOSFET 2, and the MOSFET 3 is connected in series after the resistor R4. A test signal (Test) can be provided to the MOSFET 3, thereby drawing current through the force line F+ for self-testing.

A high-frequency response is required for the circuit that includes the resistors R1 and R2 and the amplifier A3. For this purpose, a capacitor C3 is provided in parallel with the integrated circuit of the device 300. The capacitor C3 is built into the support structure 80 shown in FIG. 4. The force line F+ should have a relatively low inductance to allow for proper functioning of the capacitor C3 and the high-frequency response of the circuit including the resistors R1 and R2 and the amplifier A3. For this purpose, the force line F+ includes two sets of parallel power conductors 590 and 592, respectively. The subsystems A and B are connected to a single substrate, and the conductors 590 of the first set are traces that are formed on the substrate. The conductors 590 all have first ends that are connected to one another and second ends that are connected to one another, so that middle sections of the conductors 590 conduct current in parallel. The second ends of the conductors 590 are connected to a common pin. The conductors 592 are in the form of individual electric lines in a respective power flexible attachment 46P. First ends of the conductors 592 are connected to one another and second ends of the conductors 592 are connected to one another, so that middle sections of the conductors 592 conduct the current received from the conductors 590 in parallel. The second ends of the conductors 592 are all connected to one power terminal 72P.

The distribution board 48 has two ground sense contacts at each interface 92. Ground sense terminals at each interface 92 connect to the ground sense contacts 74G. Eight ground sense lines are provided to a ground modulation circuit including an amplifier A4 and a filter 201. The voltage detected at the ground sense contact 74G is added by the ground modulation circuit to a variable input voltage (Vrefin). Ideally, the voltage detected at the ground sense contact 74G is 0 V, in which case the voltage Vrefout would be equal to the variable voltage Vrefin. If the voltage detected at the ground sense contact 74G is not zero but is, for example, 0.1 V, then Vrefout is driven to Vrefin + 0.1 V; with Vrefin at 1 V, Vrefout becomes 1.1 V. The voltage provided to the negative terminal of the amplifier A3 would then also be 1.1 V, and the voltage provided to the power terminal 72P would be 1.1 V.
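The ground modulation just described reduces to adding the measured ground-sense voltage to the requested reference, so that the voltage forced at the device tracks the intended level even when the local ground is offset. A minimal sketch of the relationship; the function name is illustrative:

    // Vrefout = Vrefin + V(ground sense); e.g. 1.0 V + 0.1 V -> 1.1 V.
    double compensated_reference_v(double vref_in, double ground_sense_v) {
        return vref_in + ground_sense_v;
    }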
FIG. 30 illustrates one channel of the driver board 262 shown in FIGS. 22 and 27. The same circuit illustrated in FIG. 30 is replicated for each of the multiple channels of the driver board 262. Also illustrated in FIG. 30 are multiple ones of the devices 300 and their respective ground sense contacts 72G. Voltages detected by respective ground sense terminals on the ground sense contacts 74G (or 72G) are averaged and provided to a filter 700. Under normal operating conditions, the voltage provided to the filter 700 would be 0 V. There may sometimes be a small deviation from 0 V, for example, 0.1 V. The 0.1 V is provided by the filter 700 to a positive terminal of an amplifier A4. A negative terminal of the amplifier A4 is then also driven to 0.1 V. One resistor R9 is connected between the negative terminal and an output of the amplifier A4. A resistor R10, having the same resistance as the resistor R9, is also connected to the negative terminal of the amplifier A4. A 10 V voltage source 702 is connected over the resistors R9 and R10. The two terminals of the voltage source 702 are then 5 V above and 5 V below the voltage at the negative terminal of the amplifier A4, and thus at -4.9 V and 5.1 V, respectively. The terminals of the 10 V voltage source 702 are connected to respective terminals R+ and R- of a digital-to-analog converter (DAC) 704. The DAC 704 also has output terminals and has the ability to switch each output terminal to a voltage between -4.9 V and 5.1 V. A microprocessor bus 705 is connected to the DAC 704. Information representing desired high and low voltages can be loaded from the microprocessor bus 705 into the DAC 704. The DAC 704 can, for example, be programmed with a high voltage of 3 V and a low voltage of 2 V. Because the voltage provided to the positive terminal of the amplifier A4 is at 0.1 V, the output terminals of the DAC 704 are, in this example, held at 3.1 V and 2.1 V, respectively.

The output terminals of the DAC 704 are connected to high-voltage and low-voltage (VH and VL) terminals of a voltage switch 706. The pattern generator board 260 illustrated in FIGS. 22 and 27 provides a signal source 708 to a signal terminal of the voltage switch 706. The voltage switch 706 is a bus switch in the present example, having a 5 V power supply voltage. The signal source 708 switches between alternating true and false states. In a true state, a first terminal of the switch 706, connected to the high voltage VH, is connected to an output of the switch 706, and in a false state, the terminal connected to the low voltage VL is connected to the output of the switch 706. The output of the switch 706 thus switches between 3.1 V and 2.1 V in response to the signal source 708.

A damping circuit, including a resistor R11 and a capacitor C4, has an input connected to the output of the switch 706. The resistor R11 has one terminal connected to the switch 706, and an opposing terminal of the resistor R11 is connected through the capacitor C4 to ground. An effect of the damping circuit represented by the resistor R11 and the capacitor C4 is that the slew rate of the signal provided at the output of the switch 706 is reduced. The switch 706 provides a square wave at its output, and the damping circuit has an output that responds to the square wave in a non-square fashion. Specifically, the voltage on the output of the damping circuit increases more slowly than the voltage provided to the input of the damping circuit; because the capacitor C4 charges exponentially through the resistor R11, the 10-to-90-percent rise time at the output of the damping circuit is approximately 2.2 x R11 x C4. The response voltage of the damping circuit is provided to an amplifier A5 with a gain of two, and then through a switch 708 to respective signal contacts 74S (see also reference numeral 74 in FIG. 4) of the devices 300. Because the signal provided to the devices 300 is dampened, ringing can be reduced or eliminated.

FIG. 31 illustrates a prior art solution, wherein a termination damping circuit is provided at a termination of one device. The termination damping circuit provides a dampening effect at the device that is being tested. However, the functioning of the termination depends to a large extent on the length of the line connected to the device that is being tested. As illustrated in FIG. 30, the signal contacts 74S can be at different distances from the damping circuit, as measured along the path that the current flows in the circuit, and can be used without a termination damping circuit. Furthermore, the signal contacts 74S can be spaced differently from one application to another, for example, by 10 inches in one application and 18 inches in another application, and the same damping circuit will reduce ringing in each application.

While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive of the current invention, and that this invention is not restricted to the specific constructions and arrangements shown and described, since modifications may occur to those ordinarily skilled in the art.
67,682
11860222
DETAILED DESCRIPTION

Exemplary embodiments will be described here in detail, examples of which are represented in the accompanying drawings. Where the following description relates to the accompanying drawings, unless otherwise indicated, the same numbers in different accompanying drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure. On the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as detailed in the appended claims.

In recent years, with the development of integrated circuits, the number of metal layers has increased, line widths have decreased, and circuit densities have increased, such that the crosstalk effect of integrated circuits has become more and more serious, ultimately affecting the quality of the output signals of the integrated circuits. In order to improve the quality of those output signals, it is necessary to analyze and overcome the crosstalk effect of the integrated circuits.

Based on this, a researcher estimates the influence of the crosstalk effect on the performance of the circuit by performing post-layout simulation on the integrated circuit through software in the design process of the integrated circuit. Post-layout simulation includes extracting parasitic parameters and adding the parasitic parameters into the simulation. In fact, however, the influence of the crosstalk effect is related not only to the magnitude of the parasitic capacitance but also to the input test case: different input test cases yield different final results. At present, the test cases used for post-layout simulation of an integrated circuit cannot fully cover all the application scenarios. In addition, even if the test cases could fully cover all the application scenarios, the scale of a full-chip post-layout simulation net-list (RCC SPF) including the parasitic coupling capacitors is very large (greater than 20 GB) and the simulation time is very long; thus, from the standpoint of time, an actual project requirement cannot be met.
In addition, a pulse signal outputted through a full-chip layout is affected by the crosstalk effect and can also be affected by other factors. As such, the following three problems may exist when testing the crosstalk effect of an integrated circuit.

(1) The parasitic parameters extracted by existing post-layout simulation are comprehensive and include other parasitic parameters besides those responsible for the crosstalk effect.

(2) The crosstalk effect is related to the test case: the phase of an interference signal relative to that of the interfered signal, and the rise time and fall time of the interference signal, all affect the rise time and fall time of the interfered signal. It is impossible to cover all test cases; the present disclosure therefore simulates the case of maximum interference according to actual project conditions.

(3) Because the parasitic parameters extracted by post-layout simulation are comprehensive, the scale of the post-layout simulation net-list is very large, resulting in a very long simulation time, which is not beneficial to the design process.

Based on this, the present disclosure provides an accurate and efficient method, circuit, and apparatus for testing the crosstalk effect, suitable for actual project design and development. A crosstalk effect test circuit created according to the method for testing the crosstalk effect simulates only the crosstalk environment in which an integrated circuit under test is actually located, eliminating the influence of the other parasitic parameters of the integrated circuit under test. A test signal is inputted to the crosstalk effect test circuit, and an interfered signal is obtained. The interfered signal reflects only the change in the test signal caused by the crosstalk effect, so a terminal device can accurately determine the crosstalk effect of the integrated circuit under test from the interfered signal.

Referring to FIG. 1, embodiment I of the present disclosure provides a crosstalk effect test circuit 10, including a first circuit 100, N second circuits 200, and N capacitors 300, where N is an integer greater than 0. As shown in FIG. 1, the first circuit 100 is configured to simulate an interfered first signal circuit in the integrated circuit under test, and the N second circuits 200 are configured to simulate N second signal circuits that interfere with the first signal circuit in the integrated circuit under test. A second signal circuit is a signal circuit that produces crosstalk interference with the first signal circuit. The crosstalk effect occurs between signals having the same wiring direction, and the closer two signal lines are, the greater the crosstalk effect between them. Therefore, a second signal circuit is a signal line having the same wiring direction as the first signal circuit and adjacent to the first signal circuit. The number of second signal circuits can be selected according to an actual requirement. Optionally, the second signal circuits can be the five signal circuits that are likely to cause the greatest interference with the first signal circuit. The N second signal circuits are the N signal circuits that have relatively large interference, extracted from all the signal circuits interfering with the first signal circuit. The selection of the second signal circuits is related to the coupling capacitance value between each signal circuit and the first signal circuit.
Optionally, the coupling capacitance value between a second signal circuit and the first signal circuit exceeds a preset capacitance value. The preset capacitance value can be set by a researcher according to actual conditions. Optionally, the coupling capacitance values between the different signal circuits and the first signal circuit can be extracted by a layout parasitic extraction tool, for example, by the Star-RC software.

Many coupling capacitors and many interference circuits (the second signal circuits) exist in the integrated circuit under test and cannot all be simulated in the crosstalk effect test circuit. The more faithfully the crosstalk effect test circuit simulates the integrated circuit under test, the more accurate the obtained crosstalk effect test result is, and the higher the accuracy of the correspondingly obtained interfered signal is; however, this also consumes a great deal of simulation test time. Therefore, the present disclosure places the coupling capacitors having the greatest influence on the first signal circuit in the crosstalk effect test circuit, so as to balance test accuracy against test time.

Specifically, the N capacitors 300 simulate the coupling capacitors between the first signal circuit and the N second signal circuits. In the crosstalk effect test circuit 10, one pole plate of each capacitor 300 is connected to the first circuit 100, the other pole plate of the capacitor 300 is connected to a second circuit 200, and the capacitance value of the capacitor is determined according to the extracted coupling capacitance value between the corresponding second signal circuit and the first signal circuit. As shown in FIG. 1, the N capacitors 300 are, respectively, a capacitor CC1, a capacitor CC2, a capacitor CC3, a capacitor CC4, and a capacitor CC5, which are five simulated coupling capacitors between the second signal circuits and the first signal circuit. The capacitance value corresponding to the capacitor CC1 is the maximum capacitance value among the five coupling capacitors, and the capacitance values corresponding to the capacitors CC1, CC2, CC3, CC4, and CC5 decrease sequentially.

An input end of the first circuit 100 is configured to receive the test signal. An input end of each second circuit 200 is configured to receive an interference input signal, and an output end of the second circuit is configured to output the interference signal. When the interference input signal flows into the second circuit 200, the test signal is affected by the interference signal coupled through the capacitor 300 and becomes the interfered signal. An output end of the first circuit outputs the interfered signal. The interfered signal can reflect the influence of the crosstalk effect on the test signal, and further reflects the influence of the crosstalk effect on the first signal circuit in the integrated circuit under test.

The crosstalk effect test circuit 10 provided by this embodiment can simulate the integrated circuit under test: each coupling capacitor interfering with the first signal circuit in the integrated circuit under test is correspondingly provided as a capacitor 300 of the crosstalk effect test circuit 10, and the capacitor 300 connected between the first circuit 100 and the second circuit 200 is used to simulate the influence of the coupling capacitor on the first signal circuit. Hence, the crosstalk effect test circuit 10 extracts only the coupling capacitors and eliminates the influences of the parasitic parameters other than those of the crosstalk effect. In this way, the crosstalk effect determined from the interfered signal is more accurate. In addition, the crosstalk effect test circuit 10 does not need to occupy a lot of simulation time, which is also beneficial for the development of the integrated circuit design.
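For illustration, the selection of the N second signal circuits can be sketched as filtering the extracted coupling capacitances against the preset capacitance value and keeping the N largest (N = 5 in the example of FIG. 1). The data layout below is an assumption; the extraction itself would be performed by a tool such as Star-RC:

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    struct Aggressor { std::string net; double coupling_farads; };

    std::vector<Aggressor> select_second_circuits(std::vector<Aggressor> extracted,
                                                  double preset_farads, std::size_t n) {
        // Discard couplings at or below the preset capacitance value.
        extracted.erase(std::remove_if(extracted.begin(), extracted.end(),
                            [&](const Aggressor& a) {
                                return a.coupling_farads <= preset_farads;
                            }),
                        extracted.end());
        // Sort so that CC1 corresponds to the largest coupling capacitance.
        std::sort(extracted.begin(), extracted.end(),
                  [](const Aggressor& a, const Aggressor& b) {
                      return a.coupling_farads > b.coupling_farads;
                  });
        if (extracted.size() > n) extracted.resize(n);
        return extracted;
    }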
Referring to FIG. 2, embodiment II provided by the present disclosure further describes the first circuit 100 and the second circuit 200 on the basis of embodiment I. The first circuit 100 includes an inverting unit 110, a first driving unit 120, and a first load unit 130. The second circuit 200 includes a second driving unit 210 and a second load unit 220. An input end of the inverting unit 110 is configured to receive the test signal, perform inverting processing on the test signal, and then output an inverted test signal. An input end of the first driving unit 120 is connected to an output end of the inverting unit 110, and an output end of the first driving unit 120 outputs the interfered signal. An input end of the first load unit 130 is connected to the output end of the first driving unit 120. An input end of the second driving unit 210 is configured to receive the interference input signal, and an output end of the second driving unit 210 is configured to output the interference signal. An input end of the second load unit 220 is connected to the output end of the second driving unit 210.

One pole plate of the capacitor 300 is connected to the output end of the second driving unit 210 and to the input end of the second load unit 220, and the other pole plate of the capacitor 300 is connected to the first circuit 100. Specifically, the other pole plate of the capacitor 300 is connected to the output end of the first driving unit 120 and to the input end of the first load unit 130. As shown in FIG. 2, one pole plate of the capacitor CC1 is connected between the second driving unit 210 and the second load unit 220, and the other pole plate of the capacitor CC1 is connected between the first driving unit 120 and the first load unit 130.

The first circuit 100 includes the inverting unit 110, but the second circuit does not include an inverting unit. Therefore, when the test signal and the interference input signal are pulse signals of the same phase, and the test signal is inputted into the first circuit 100 while the interference input signal is simultaneously inputted into the second circuit 200, the interfered signal is inverted in phase relative to the interference signal, and the crosstalk effect has the greatest influence on the rise time and the fall time of the interfered signal. Optionally, the inverting unit 110 can be a typical inverter unit consisting of a core device in a process digital standard cell library. The main function of the inverting unit 110 is to produce the test case of the crosstalk effect test circuit 10 having the strongest crosstalk effect. The researcher can produce different test cases for the crosstalk effect test circuit 10 by adjusting the rise time and the fall time of the interference input signal. Optionally, the rise time and the fall time of the interference input signal are both a preset time period, for example, 20 picoseconds (ps).
It is found through experiment that when the rise time and the fall time of the interference input signal are both 20 ps, the interference signal outputted by the second driving unit has the shortest rise time and fall time, the N interference signals act on the interfered signal at almost the same moment, and the interfered signal and the interference signal have the same frequency and are inverted relative to one another; the interference signal then has the strongest crosstalk effect on the interfered signal, and the rise time and the fall time of the obtained interfered signal are most affected by the crosstalk effect. The crosstalk effect test circuit 10 provided by this embodiment can thus output the interfered signal at the time of the greatest crosstalk effect, thereby helping the researcher to analyze the change in the output signal of the integrated circuit under test under the greatest crosstalk effect.

Referring to FIG. 3, embodiment III of the present disclosure provides a crosstalk effect test circuit 10 in which a control switch 400 is added on the basis of the crosstalk effect test circuit 10 provided by embodiment I or embodiment II. One end of the control switch 400 is connected to one pole plate of a capacitor 300, the other end of the control switch 400 is connected to the first circuit 100 or the second circuit 200, and the control switch 400 is connected in series with the capacitor 300. Optionally, the crosstalk effect test circuit 10 provided by this embodiment includes N control switches 400, arranged corresponding to the N capacitors 300.

As shown in FIG. 3, one end of a control switch Switch1 is connected to the capacitor CC1, and the other end of the control switch Switch1 is connected to a second circuit L1, the second circuit L1 being the second circuit 200 connected to the capacitor CC1. Optionally, the other end of the control switch Switch1 can instead be connected to the first circuit 100. One end of a control switch Switch2 is connected to the capacitor CC2, and the other end of the control switch Switch2 is connected to the first circuit 100. Optionally, the other end of the control switch Switch2 can instead be connected to a second circuit L2, the second circuit L2 being the second circuit 200 connected to the capacitor CC2. By analogy, one end of a control switch Switch5 is connected to the capacitor CC5, and the other end of the control switch Switch5 is connected to the first circuit 100. Optionally, the other end of the control switch Switch5 can instead be connected to a second circuit L5, the second circuit L5 being the second circuit 200 connected to the capacitor CC5.

The control switch 400 is mainly configured to control two different working modes of the crosstalk effect test circuit 10. One working mode is that, when all the control switches 400 in the crosstalk effect test circuit 10 are turned off, the N second circuits 200 and the N capacitors 300 have no crosstalk influence on the interfered signal. The other working mode is that, when some or all of the control switches 400 in the crosstalk effect test circuit 10 are turned on, the interfered signal is interfered with by the crosstalk effect. A comparison of the rise time and the fall time of the interfered signal outputted by the first circuit 100 in the two working modes can reflect the change in the interfered signal when it is interfered with by the crosstalk effect.
Optionally, in the second working mode, the interference between different second circuits 200 and the first circuit 100 can be determined by turning on different control switches 400. For example, the control switch Switch1 is turned on, and the other control switches, i.e., the control switch Switch2, the control switch Switch3, the control switch Switch4, and the control switch Switch5, are turned off. In this case, the crosstalk effect existing between the second circuit L1 and the first circuit L0 can be obtained from the differences in the rise time and the fall time of the interfered signal between the two working modes.

The crosstalk effect test circuit 10 provided by this embodiment adds the control switch 400 on the basis of the crosstalk effect test circuit 10 provided by embodiment I or embodiment II. The control switch 400 can place the crosstalk effect test circuit 10 into the two different working modes. Moreover, by turning the control switches 400 on and off, the crosstalk effect between an individual second circuit 200 and the first circuit 100 can be tested, or the crosstalk effects between a plurality of second circuits 200 and the first circuit 100 can be tested. The crosstalk effect test circuit 10 provided by this embodiment makes it more convenient to change the test mode when researchers perform a crosstalk effect test, such that the crosstalk effect test results are richer and more targeted.
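For illustration, the two-working-mode comparison enabled by the control switches 400 can be sketched as follows: edge times measured with all switches off (no crosstalk) are compared against edge times measured with selected switches on, and the result is also checked against a preset time threshold, anticipating step S530 of the method described below. Measurement acquisition is abstracted, and the names are illustrative:

    struct EdgeTimes { double rise_s; double fall_s; };

    struct CrosstalkResult {
        double rise_delta_s;  // edge degradation attributable to the enabled second circuits
        double fall_delta_s;
        bool   excessive;     // either edge of the interfered signal exceeds the threshold
    };

    // baseline: all control switches 400 off; coupled: selected switches on.
    CrosstalkResult compare_modes(EdgeTimes baseline, EdgeTimes coupled,
                                  double preset_threshold_s) {
        return CrosstalkResult{
            coupled.rise_s - baseline.rise_s,
            coupled.fall_s - baseline.fall_s,
            coupled.rise_s > preset_threshold_s || coupled.fall_s > preset_threshold_s
        };
    }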
The method for testing the crosstalk effect provided by embodiment IV of the present disclosure is applied to a terminal device. The terminal device is a device such as a laboratory-specific server, a computer, or a mobile phone. FIG. 4 is a schematic view of an application scenario of the method for testing the crosstalk effect provided by the present disclosure. The terminal device can simulate the crosstalk effect of the integrated circuit under test to generate the crosstalk effect test circuit 10. Various input boxes and keys, such as a test signal parameter setting box, an interference input signal parameter setting box, a test result recording key, and a test result saving key shown in FIG. 4, can further be configured on the terminal device. The researcher can set a parameter of the test signal and a parameter of the interference input signal on the terminal device, then input the test signal and the interference input signal into the crosstalk effect test circuit 10 by operating the keys to obtain the interfered signal, and analyze the interfered signal to determine a test result of the crosstalk effect of the integrated circuit under test.

Referring to FIG. 5, the method for testing the crosstalk effect includes the following steps.

In S510, the test signal and the interference input signal are obtained. The test signal and the interference input signal can be set by a worker on the terminal device. Preferably, the interference input signal can be an approximately ideal pulse signal; that is, the rise time and the fall time of the interference input signal can be 20 ps.

In S520, the test signal and the interference input signal are inputted into the crosstalk effect test circuit obtained by simulation, so as to obtain the interfered signal. The crosstalk effect test circuit obtained by simulation is the crosstalk effect test circuit 10, which can be a test circuit as described in embodiment I, embodiment II, or embodiment III. The crosstalk effect test circuit 10 is a simulation circuit provided on the terminal device.

When the crosstalk effect test circuit 10 obtained by simulation is as described in embodiment I, the terminal device simulates the first circuit 100 according to the first signal circuit and simulates the N second circuits 200 according to the N second signal circuits. The terminal device then obtains the coupling capacitance values between the second signal circuits and the first signal circuit and creates the capacitors 300 according to the coupling capacitance values. Specifically, the terminal device selects the N capacitors to be connected into the crosstalk effect test circuit according to the capacitance values of the coupling capacitors between the first signal circuit and the N second signal circuits.

When the crosstalk effect test circuit 10 obtained by simulation is as described in embodiment II, the first circuit 100 includes the inverting unit 110, the first driving unit 120, and the first load unit 130. The input end of the inverting unit 110 is configured to receive the test signal, perform inverting processing on the test signal, and then output the inverted test signal. The input end of the first driving unit 120 is connected to the output end of the inverting unit 110, and the output end of the first driving unit 120 is configured to output the interfered signal. The input end of the first load unit 130 is connected to the output end of the first driving unit 120. The second circuit 200 includes the second driving unit 210 and the second load unit 220. The input end of the second driving unit 210 is configured to receive the interference input signal, and the output end of the second driving unit 210 is configured to output the interference signal. The input end of the second load unit 220 is connected to the output end of the second driving unit 210. One pole plate of the capacitor 300 is connected to the output end of the second driving unit 210 and to the input end of the second load unit 220, and the other pole plate of the capacitor 300 is connected to the first circuit 100.

When the crosstalk effect test circuit 10 obtained by simulation is as described in embodiment III, the terminal device creates the control switches 400. One end of each control switch 400 is connected to one pole plate of a capacitor 300, and the other end of the control switch 400 is connected to the first circuit 100 or the second circuit 200. The control switch 400 is connected in series with the capacitor 300.

The crosstalk effect test circuit 10 includes the N second circuits 200; the interference input signal can be inputted to each second circuit 200, or to only some of the N second circuits 200. If the method provided by this embodiment is applied to the crosstalk effect test circuit 10 provided by embodiment III, then, when a control switch 400 is turned on, the interference signal outputted by the corresponding second circuit 200 causes interference with the interfered signal of the first circuit 100, and when the control switch 400 is turned off, the second circuit 200 does not generate signal interference with the first circuit 100.

In S530, when the rise time of the interfered signal or the fall time of the interfered signal is greater than the preset time threshold, it is determined that an excessive crosstalk effect exists in the integrated circuit under test. The longer the rise time or the fall time of the interfered signal, the more serious the crosstalk effect existing in the integrated circuit under test.
The preset time threshold can be set according to an actual condition. The preset time thresholds corresponding to different integrated circuits under test can also be different. If the preset time threshold is 1 nanosecond, when the rise time or the fall time of the interfered signal is greater than 1 nanosecond, it is determined that an excessive crosstalk effect exists in the integrated circuit under test. The terminal device can store the rise times and the fall times of a plurality of interfered signals. A storage mode can be selected according to an actual requirement, and the present disclosure does not limit this. Referring to FIG. 6, embodiment V of the present disclosure provides a crosstalk effect test apparatus 30, including an obtaining module 31, an inputting module 32, and a processing module 33. The obtaining module 31 is configured to obtain the test signal and the interference input signal. The test signal and the interference input signal are pulse signals of the same phase, and the rise time and the fall time of the interference input signal are both preset times. The inputting module 32 is configured to input the test signal and the interference input signal into the crosstalk effect test circuit obtained by simulation, so as to obtain the interfered signal. The crosstalk effect test circuit includes the first circuit, the N second circuits, and the N capacitors. The first circuit is configured to simulate the interfered first signal circuit in the integrated circuit under test; the input end of the first circuit is configured to receive the test signal, and the output end of the first circuit is configured to output the interfered signal. The N second circuits are configured to simulate the N second signal circuits that interfere with the first signal circuit in the integrated circuit under test, where N is an integer greater than 0; the input end of each second circuit is configured to receive the interference input signal, and the output end of the second circuit is configured to output the interference signal. One pole plate of each of the N capacitors is connected to the first circuit, the other pole plate of the capacitor is connected to the second circuit, and the capacitance value of the capacitor is determined according to the measured coupling capacitance value between the second signal circuit and the first signal circuit. The processing module 33 is configured to determine that an excessive crosstalk effect exists in the integrated circuit under test when the rise time of the interfered signal or the fall time of the interfered signal is greater than the preset time threshold. The crosstalk effect test apparatus 30 further includes: a simulating module 34, configured to simulate the first circuit according to the first signal circuit, simulate the N second circuits according to the N second signal circuits, obtain the coupling capacitance values between the second signal circuits and the first signal circuit, and create the capacitors according to the coupling capacitance values.
The first circuit includes: the inverting unit, the input end of the inverting unit being configured to receive the test signal, perform inverting processing on the test signal, and then output the inverted test signal; the first driving unit, the input end of the first driving unit being connected to the output end of the inverting unit, and the output end of the first driving unit being configured to output the interfered signal; and the first load unit, the input end of the first load unit being connected to the output end of the first driving unit. The second circuit includes: the second driving unit, the input end of the second driving unit being configured to receive the interference input signal, and the output end of the second driving unit being configured to output the interference signal; and the second load unit, the input end of the second load unit being connected to the output end of the second driving unit. One pole plate of the capacitor is connected to the output end of the second driving unit and to the input end of the second load unit, and the other pole plate of the capacitor is connected to the first circuit. The simulating module 34 is further configured to create the control switches. One end of each control switch is connected to one pole plate of the capacitor, and the other end of the control switch is connected to the first circuit or the second circuit. The control switch is connected in series to the capacitor. The simulating module 34 is further configured to select the N capacitors to be connected to the crosstalk effect test circuit, to simulate the N second circuits, according to the capacitance values of the coupling capacitors between the first signal circuit and the plurality of second signal circuits. Referring to FIG. 7, embodiment VI of the present disclosure further provides a terminal device 40, including a memory 41, a processor 42, and a transceiver 43. The memory 41 is configured to store an instruction. The transceiver 43 is configured to communicate with other devices. The processor 42 is configured to execute the instruction stored in the memory 41, such that the terminal device 40 implements the method for testing crosstalk effect provided by embodiment IV above. The specific implementations and the technical effects are similar and are not described here again. The present disclosure further provides a computer-readable storage medium. The computer-readable storage medium stores a computer executable instruction. When the computer executable instruction is executed by the processor, it implements the method for testing crosstalk effect provided by embodiment IV above. The specific implementations and the technical effects are similar and are not described here again. The present disclosure further provides a computer program product, including a computer program. When the computer program is executed by the processor, the method for testing crosstalk effect provided by embodiment IV above is implemented. The specific implementations and the technical effects are similar and are not described here again. It should be noted that the aforementioned computer-readable storage medium can be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random-Access Memory (FRAM), a flash memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM).
The computer-readable storage medium can also be any electronic device including one or any combination of the aforementioned memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant. It should be noted that the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or device including a series of elements includes not only those elements but also other elements that are not explicitly listed, or further includes elements inherent to the process, method, article, or apparatus. Absent further limitations, an element defined by the statement "including one . . . " does not exclude the presence of other identical elements in the process, method, article, or apparatus including the element. The sequence numbers of the aforementioned embodiments of the present disclosure are merely for description and do not indicate the preference of the embodiments. By means of the description of the foregoing implementations, a person skilled in the art can clearly understand that the method according to the foregoing embodiments can be implemented by software together with a necessary general-purpose hardware platform, and can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present disclosure, or the part thereof contributing to the prior art, may be essentially embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a floppy disk, or an optical disc) and includes several instructions such that a computer device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) implements the method according to the embodiments of the present disclosure. The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products of the embodiments of the present disclosure. It should be understood that a computer program instruction is configured to implement each flow and/or block in the flowcharts and/or block diagrams, and the combination of flows/blocks in the flowcharts and/or block diagrams. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or processors of other programmable data processing devices to generate a machine, such that an apparatus for implementing the functions specified in one or more flows in the flowcharts and/or one or more blocks in the block diagrams is generated through the instructions executed by the computer or the processors of other programmable data processing devices. These computer program instructions may also be stored in a computer-readable memory that can direct the computer or other programmable data processing devices to work in a particular manner, such that the instruction stored in the computer-readable memory generates a product including an instruction apparatus, which implements the functions specified in one or more flows in the flowchart and/or one or more blocks in the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing devices such that a series of operation steps are executed on the computer or other programmable data processing devices to generate computer-implemented processing, and thus the instructions executed on the computer or other programmable data processing devices provide the steps for implementing the functions specified in one or more flows in the flowchart and/or one or more blocks in the block diagram. The above are merely preferred embodiments of the present disclosure and thus do not limit the patent scope of the present disclosure. Any equivalent structure or equivalent process transformation made by using the content of the description and the accompanying drawings of the present disclosure, or any direct or indirect application of the present disclosure in other related technical fields, shall all be included in the scope of patent protection of the present disclosure.
35,132
11860223
DETAILED DESCRIPTION In the ensuing description, one or more specific details are illustrated, aimed at providing an in-depth understanding of examples of embodiments of this description. The embodiments may be obtained without one or more of the specific details, or with other methods, components, materials, etc. In other cases, known structures, materials, or operations are not illustrated or described in detail so that certain aspects of embodiments will not be obscured. Reference to "an embodiment" or "one embodiment" in the framework of the present description is intended to indicate that a particular configuration, structure, or characteristic described in relation to the embodiment is comprised in at least one embodiment. Hence, phrases such as "in an embodiment" or "in one embodiment" that may be present in one or more points of the present description do not necessarily refer to one and the same embodiment. Moreover, particular conformations, structures, or characteristics may be combined in any adequate way in one or more embodiments. The references used herein are provided merely for convenience and hence do not define the extent of protection or the scope of the embodiments. A BIST circuit architecture may include an I/Q image rejection mixer. This architecture may be notionally able to generate a SSB (Single-Side Band) signal with characteristics adapted for the calibration of a radar sensor IC. In such an arrangement, image signal rejection is proportional to I/Q phase and amplitude accuracy. For high-frequency ICs, the conventional generation of I and Q signals is complex and generally not accurate enough for an image rejection architecture. This technical drawback limits the use of loop-back solutions for, e.g., high-frequency applications such as radar applications. A calibration procedure may improve radar sensor performance. An approach in that respect may involve measuring calibration data, e.g., using well-known RF test signals and modulation schemes, and storing them in the sensor at one temperature, for instance during an end-of-line test. During sensor operation, the calibration data may be used by target detection algorithms for compensating, e.g., silicon IC process variations. Achieving good performance of a sensor can be facilitated by IC performance being as stable as possible versus, e.g., frequency, temperature, and aging. Run-time procedures for updating calibration data may thus be helpful. FIG. 1 shows a (simplified) exemplary block diagram of a radar sensor (e.g., including a radar sensor IC) 10, by referring to the exemplary case of an, e.g., FMCW radar sensor 10 capable of detecting an object O at a distance (range) d. Such a sensor may include a RF frequency synthesizer 12 generating a local oscillator signal TX/LO fed to a (transmitter) variable gain amplifier (VGA) 14. The VGA may in turn feed a power amplifier (PA) 16 driving a transmission (TX) antenna 20. A corresponding incoming (echo) signal received at a receiver (RX) antenna 22 may be fed via a RF coupler circuit 24 to a low noise amplifier (LNA) 26 and on to a mixer circuit 28 fed with the local oscillator signal TX/LO to produce a down-converted intermediate frequency (IF) signal, which in turn is fed to a (receiver) variable gain amplifier (VGA) 30. A RF Built-In-Self-Test (BIST) block 32 may generate a RF test signal (with known characteristics) which may be fed to the high frequency stage 24 to reproduce (simulate) an echo radar signal.
Such a signal may have, e.g., the following characteristics: a Single-Side Band (SSB) signal; frequency modulation; coherence with the local oscillator (TX/LO) signal; variable frequency; injection at the input of the receiver (e.g., RF coupler stage 24). Calibration procedures applied to a circuit layout as exemplified in FIG. 1 may be able to fix both systematic and random errors as revealed by a RF test signal. Such RF test signals may thus be useful in radar sensor (e.g., radar sensor IC) auto-diagnostics and calibration procedures, e.g., with respect to hardware fault and performance improvement. For instance, a RF test signal from the BIST block 32 in FIG. 1 may be used to simulate an echo signal from the radar sensor IC as depicted in FIG. 2: e.g., during radar sensor calibration, IC malfunction leading to unwanted spurious Doppler shifts (and thus spurious range shifts) may be detected, e.g., by analyzing the FFT of the base band IF signal generated (e.g., at 28 in FIG. 1) by injecting at the receiver input (e.g., at RF coupler 24) the (known) RF test signal RFTEST. The diagrams in part 2A of FIG. 2 are exemplary of a possible behavior over time (abscissa scale) of the frequency (ordinate scale) of transmitted and received signals TX and RX varying with a (modulation) bandwidth BW over a time Ts with a Doppler shift DS and a range shift RS. The diagram in part 2B of FIG. 2 is exemplary of a possible time behavior of a corresponding IF signal with a frequency fIF = |fRX − fTX|. In one or more embodiments, a RF test signal may be generated by resorting to the BIST architectures exemplified in FIG. 3 or FIG. 4. In both FIGS. 3 and 4, the left-hand dashed area is exemplary of a frequency generator 120 (e.g., for a radar sensor IC). In one or more embodiments, a simple implementation of such a generator 120 may include a voltage-controlled oscillator (VCO) 122 (see the frequency synthesizer/generator 12 in FIG. 1) and a frequency divider (:N) 124 acting on the output from the oscillator 122 to produce a frequency-divided signal fDIV. In one or more embodiments, the oscillation frequency of the output signal of the oscillator 122 (which may correspond to the signal TX/LO of the diagram of FIG. 1) may be controlled using a tuning signal (e.g., a voltage signal VFINE, e.g., from a modulator 122a). In one or more embodiments, the frequency of the output signal of the oscillator 122 may be controlled by "finely tuning", with the signal VFINE from the modulator 122a, a coarser signal VCOARSE as derived, e.g., from a digital-to-analog converter (DAC) 122b. In one or more embodiments, the modulator 122a and the DAC 122b may be elements external to an IC as exemplified herein. In one or more embodiments, a radar sensor (micro) controller circuit MC may control various components/parts of, e.g., a radar sensor IC as exemplified in the figures. In order to avoid making the graphical representation unnecessarily complex, the possible control action of the controller MC is represented in the figures as an arrow pointing into a certain component/part. For instance, the controller MC may detect (measure) the oscillation frequency from the output of the frequency divider 124 during a calibration time and produce a desired modulation scheme (e.g., chirps) as in the radar output signal. In one or more embodiments, a frequency generator 120 as exemplified herein may include additional/more complex circuits, such as, e.g.: integrated modulator(s); an integrated DAC, e.g., to reduce the VFINE voltage sensitivity; a fully integrated N-fractional or N-integer PLL (see below).
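The tuning scheme sketched above lends itself to a simple numerical illustration. The following Python sketch is a first-order model only; the oscillator gain constants, the base frequency, the divider ratio, and all voltage values are hypothetical, not taken from the disclosure. It shows how a coarse DAC voltage and a fine tuning voltage could combine to set the TX/LO frequency, and how the divider produces fDIV:

```python
# Hypothetical first-order VCO model: f = F0 + K_COARSE*Vcoarse + K_FINE*Vfine.
F0 = 76.0e9        # base oscillation frequency (Hz), made up
K_COARSE = 2.0e9   # coarse tuning gain (Hz/V), made up
K_FINE = 50.0e6    # fine tuning gain (Hz/V), made up
N = 1024           # frequency divider ratio (:N), made up

def vco_frequency(v_coarse, v_fine):
    """Oscillation frequency of oscillator 122 for given tuning voltages."""
    return F0 + K_COARSE * v_coarse + K_FINE * v_fine

v_coarse = 1.2     # e.g., from DAC 122b
v_fine = 0.35      # e.g., from modulator 122a
f_txlo = vco_frequency(v_coarse, v_fine)
f_div = f_txlo / N  # frequency-divided signal fDIV from divider 124

print(f"TX/LO = {f_txlo/1e9:.4f} GHz, fDIV = {f_div/1e6:.3f} MHz")
```

In the figures' terms, VCOARSE sets the operating point and VFINE provides the fine adjustment or modulation; the fDIV output is what the BIST arrangements of FIGS. 3 and 4 consume.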
Operation of one or more embodiments may include the implementation details discussed above and may, for example, rely on two signals: a local oscillator signal TX/LO, which may be transmitted using the power amplifier chain (see, e.g., blocks 14 and 16 in FIG. 1) and distributed for down-conversion to IF (see, e.g., the mixer 28 of FIG. 1); and a frequency-divided signal fDIV as available, e.g., at the output of the divider 124. In one or more embodiments as exemplified in FIG. 3, the frequency-divided signal fDIV may be used for monitoring a RF test signal generated by a (further) fine-tuned oscillator. In one or more embodiments as exemplified in FIG. 4, the frequency-divided signal fDIV may be used for driving a PLL circuit including an oscillator which generates the RF test signal. One or more embodiments may thus involve: applying frequency division (e.g., at 124) to a local oscillator signal (e.g., TX/LO) to produce a frequency-divided signal (e.g., fDIV); providing a signal generator for generating a self-test signal RFTEST; and generating the self-test signal RFTEST by operating a signal generator (222 in FIG. 3; 320a in FIG. 4) with operation of said generator monitored (FIG. 3) or controlled (FIG. 4) with the frequency-divided signal. Stated otherwise, in one or more embodiments, generating the self-test signal RFTEST may involve monitoring or controlling operation of a corresponding generator based on the frequency-divided signal. FIG. 3 shows an exemplary open-loop architecture of a RF test signal generator 32 according to one or more embodiments, wherein the frequency-divided signal fDIV at the output of the divider 124 is used to monitor a RF test signal generated by an oscillator 222. In one or more embodiments as exemplified in FIG. 3, the IF output signal intended to simulate target detection may be obtained by setting a frequency shift between the RF test signal RFTEST and the TX/LO signal by using DACs on the tuning voltages. In one or more embodiments as exemplified in FIG. 3, the first DAC 122b may provide a coarse tuning voltage VCOARSE both to the oscillator 122 (fine-tuned by means of VFINE from the modulator 122a to provide the local oscillator signal TX/LO) and to another oscillator (e.g., VCO) 222, fine-tuned via a further DAC 222a. Both oscillators 122 and 222 being (digitally) controlled using a common DAC, that is 122b, may facilitate compensating oscillation frequency drifts due to temperature and silicon process variations. In one or more embodiments, respective frequency dividers 124, 224 (e.g., by a same factor N) may be coupled to the outputs of the oscillators 122, 222, with the frequency-divided outputs fDIV, fDIV_AUX from the dividers 124, 224 fed to a frequency counter 226 (clocked by a clock signal fCLK) which provides a test flag signal over a line 226a to the microcontroller MC. In one or more embodiments, such a test flag may be generated, during a calibration phase, when both oscillators 122, 222 are oscillating at the expected frequencies, due to the microcontroller MC controlling the fine tuning voltage VFINE_AUX of the (auxiliary) oscillator 222 via the DAC 222a, while the fine tuning voltage VFINE of the (main) oscillator 122 may be managed by the microcontroller MC, e.g., via the modulator 122a.
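A minimal sketch of the calibration loop suggested by this arrangement follows, reusing the hypothetical first-order VCO model from the earlier sketch; the gains, target IF, tolerance, and proportional update are all assumptions, and the ratio-based check actually performed by the counter 226 is described in more detail just below:

```python
# Minimal sketch of the FIG. 3 calibration idea; all constants hypothetical.
F0, K_FINE = 76.0e9, 50.0e6        # Hz, Hz/V (made up)
N = 1024                            # division factor of dividers 124/224
F_IF_TARGET = 1.0e6                 # desired IF offset between the VCOs (Hz)
TOL = 1e3                           # accuracy at which the test flag is raised

def f_main(v_fine):                 # oscillator 122
    return F0 + K_FINE * v_fine

def f_aux(v_fine_aux):              # auxiliary oscillator 222 (same model assumed)
    return F0 + K_FINE * v_fine_aux

v_fine, v_fine_aux = 0.2, 0.2
for _ in range(1000):
    # Frequency counter 226 compares the divided outputs fDIV and fDIV_AUX.
    f_if = f_aux(v_fine_aux) - f_main(v_fine)   # equals N*(fDIV_AUX - fDIV)
    error = F_IF_TARGET - f_if
    if abs(error) < TOL:
        test_flag = True            # flag on line 226a to the MC
        break
    # MC nudges VFINE_AUX through DAC 222a (simple proportional step).
    v_fine_aux += 0.5 * error / K_FINE
else:
    test_flag = False

print(f"test flag: {test_flag}, IF = {f_if/1e6:.3f} MHz, "
      f"VFINE_AUX = {v_fine_aux:.4f} V")
```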
In one or more embodiments, operation of the frequency counter 226 may involve comparing the frequencies of the frequency-divided signals fDIV and fDIV_AUX and determining that the oscillators 122, 222 are oscillating at the expected frequencies when, e.g., the ratio of the frequencies of fDIV and fDIV_AUX reaches a certain value: in that respect it will be appreciated that fDIV and fDIV_AUX may be generated by oscillators 122, 222 oscillating at respective frequencies such as fosc and fosc + fIF, and/or that the dividers 124, 224 need not necessarily have identical division factors (e.g., N). In one or more embodiments, the oscillator 222 may be coupled to a variable gain amplifier (VGA) 228 to provide the RF test signal RFTEST with a level (possibly monitored with a power detector 228a) adapted to be fed to the stage 24 (see FIG. 1). One or more embodiments as exemplified in FIG. 3 may thus exhibit one or more of the following features: the RF test signal RFTEST may be generated using an auxiliary oscillator, e.g., a VCO (222); the frequency shift between the RF test signal RFTEST and the TX/LO signal may be properly set using, e.g., DACs on the VCO tuning voltages; a coarse tuning voltage of both VCOs 122 and 222 may be digitally controlled using a common DAC (122b) to compensate the oscillation frequency drift due to both silicon process and temperature variations; the fine tuning voltage VFINE_AUX of the auxiliary oscillator 222 may be digitally controlled by an additional DAC 222a, while the fine tuning voltage of the oscillator 122 may be managed by the microcontroller MC during the calibration phase, e.g., via the block 122a; the RF test signal RFTEST may be a continuous wave (CW) signal with its frequency fRX generated according to the following equation: fRX = fTX/LO + fIF, where fTX/LO and fIF are the frequencies of the local oscillator signal TX/LO and the intermediate frequency signal IF, respectively; the difference between the oscillation frequencies of the oscillator 122 and the oscillator 222 may set the frequency fIF of the IF signal; the microcontroller MC may set the desired IF frequency by changing the fine tuning voltages of the oscillators 122, 222 through the associated (integrated) DACs; the accuracy in setting the IF frequency may be a function of the resolution used in controlling the fine tuning voltages VFINE, VFINE_AUX of the oscillators 122, 222; these fine tuning voltages may be modified by the microcontroller MC until the desired IF frequency is set with an expected accuracy and the test flag signal issued by the counter 226 on the line 226a becomes true; the generated RF test signal RFTEST may be a replica of the transmitted signal adapted to be injected at the receiver input (e.g., at 24 in FIG. 1), reproducing the echo radar signal; the RF test signal RFTEST down-converted with the TX/LO signal may generate an IF output signal, which may reproduce target detection; programmable frequency dividers (e.g., by a same factor N) may be used at 124, 224 to generate the frequency-divided signals fDIV and fDIV_AUX (it will be appreciated that these signals may not be identical insofar as they are generated by oscillators 122, 222 oscillating at respective frequencies such as fosc and fosc + fIF); the frequency-divided signals fDIV and fDIV_AUX may be used by a frequency counter such as 226 to provide a test flag signal (true/false signal) on a line 226a, e.g., to the microcontroller MC when both oscillators 122, 222 are oscillating at the desired frequencies with the expected accuracy; the power level of the RF test signal RFTEST may be set using a variable gain amplifier (VGA), e.g., 228, and may be detected using a power detector circuit, e.g., 228a; both the test flag signal on line 226a and the power detector circuit 228a may facilitate making the circuit compliant with the ISO 26262 standard; the BIST circuit 32 may be disabled during normal operation. FIG. 4 shows an exemplary PLL-based architecture of a RF test signal generator 32 according to one or more embodiments. In one or more embodiments, a RF test signal generator 32, intended to provide a RF test signal to be applied, e.g., to the stage 24 of FIG. 1, may include a PLL (Phase Locked Loop) circuit 320, operating, e.g., by "locking" an auxiliary, e.g., voltage-controlled oscillator (VCO) 320a using as an input the frequency-divided signal fDIV from the divider 124, possibly delayed by a (controllable) delay 322 to produce a delayed version of fDIV, namely fREF. According to an otherwise conventional PLL layout, the circuit 320 may include, in addition to the oscillator 320a, an input comparator 320b which receives fREF and the frequency from the oscillator 320a via a PLL divider 320c. The result of the (frequency) comparison in the input comparator 320b drives the oscillator 320a via a loop filter 320d. The PLL 320 having reached a lock condition may be detected by a lock detector 326, which may issue over a line 326a a test flag to the microcontroller MC, thus making the IC arrangement exemplified herein compliant with, e.g., the ISO 26262 standard. In one or more embodiments, the oscillator 320a in the PLL 320 may be followed by a variable gain amplifier (VGA) 328 to provide the receiver with a RF test signal having a level (possibly monitored with a power detector 328a) adapted to be fed to the stage 24 (see FIG. 1). In one or more embodiments, the PLL divider 320c may permit changing the RF test signal frequency, e.g., by programming the PLL frequency divider 320c. The frequency shift between the RF test signal and the local oscillator signal TX/LO may generate an IF output signal (e.g., at the output of the mixer stage 28 of FIG. 1), which simulates target detection (see, e.g., FIG. 2). In one or more embodiments, the amplitude, frequency, and phase of the IF output signal may be exploited (in a manner known per se) for calibration of the radar sensor (e.g., of the radar sensor IC). In one or more embodiments, the delay possibly applied (e.g., at 322) to the divided signal fDIV to produce fREF may permit obtaining a well-defined delay time between the RF test signal and the TX/LO signal. In one or more embodiments, the PLL circuit 320 may follow the frequency modulation applied to the transmitted signal. In one or more embodiments, the two oscillators (e.g., VCOs) 122 and 320a may be designed to oscillate at different frequencies to reduce any "pulling" effect (e.g., the VCO 122 may include a core oscillating at half the operating frequency followed by a frequency doubler, while the VCO 320a may include a core oscillating at the operating frequency, or vice versa). In one or more embodiments, the RF test signal sent to the stage 24 may be coherent with the TX/LO signal, which may facilitate simulating the transmitted signal, e.g., with the generated RF test signal exhibiting essentially the same characteristics as the radar echo signal. In one or more embodiments, the BIST circuit including the PLL block 320 may be disabled (e.g., by the controller MC) during normal operation of the radar sensor.
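A short numerical sketch may clarify how the FIG. 4 arrangement could be programmed. The relation fRF = fDIV × NPLL below is simply the steady-state lock condition of a PLL, and the range mapping is the standard round-trip relation; the disclosure itself only states that the divider 320c programs the frequency shift and the delay 322 the delay time, so the specific numbers and the fractional divider are assumptions:

```python
# Sketch of programming the FIG. 4 BIST PLL; numbers are illustrative only.
C = 3.0e8            # speed of light (m/s)
F_TXLO = 76.0e9      # local oscillator frequency (Hz), made up
N_REF = 1024         # divider 124 ratio, made up

f_div = F_TXLO / N_REF               # reference into the BIST PLL
n_pll = 1024.02                      # programmable (fractional) ratio of 320c
f_rf_test = f_div * n_pll            # PLL 320 locks oscillator 320a here
f_if = f_rf_test - F_TXLO            # frequency shift seen at the mixer 28

t_delay = 6.7e-9                     # programmable delay 322 applied to fDIV
simulated_range = C * t_delay / 2    # round-trip delay maps to target range

print(f"RF test signal: {f_rf_test/1e9:.6f} GHz")
print(f"simulated IF (Doppler-like shift): {f_if/1e6:.3f} MHz")
print(f"simulated target range: {simulated_range:.2f} m")
```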
One or more embodiments as exemplified in FIG. 4 may thus exhibit one or more of the following features: generation of a RF test signal RFTEST; during a self-test or calibration procedure, the microcontroller MC may act (e.g., on the PLL divider 320c) to provide a desired IF frequency difference between the two oscillators (122, 320a) to generate a frequency chirp (see, e.g., the TX signal in portion 2A of FIG. 2); the frequency generator 120 may provide the TX/LO signal to the transmitter chain and the receiver down-converter (see, e.g., 14, 16 and 28 in FIG. 1) as well as (e.g., via the divider 124) the frequency-divided signal fDIV to the BIST circuit 32; the frequency-divided signal fDIV may be processed by a programmable digital delay circuit 322 to generate a well-defined delay time (see, e.g., the range shift effect in portion 2A of FIG. 2) between the RF test signal and the TX/LO signal; the PLL circuit 320 may lock the auxiliary oscillator 320a using a signal fREF which is a delayed replica of the frequency-divided signal fDIV; the PLL circuit 320 may follow the frequency modulation applied to the TX/LO signal; the frequency shift between the RF test signal RFTEST and the TX/LO signal (see, e.g., the Doppler effect in portion 2A of FIG. 2) may be programmed via the PLL frequency divider 320c; the RF test signal RFTEST may be made coherent with the TX/LO signal, which facilitates simulating the transmitted signal; the amplitude, frequency, and phase of the IF output signal may be rendered accurate, thus facilitating radar sensor calibration; the oscillators 122 and 320a may be designed to oscillate at different frequencies to reduce the pulling effect. As in the case of one or more embodiments as exemplified in FIG. 3, one or more embodiments as exemplified in FIG. 4 may exhibit one or more of the following features: the lock detector 326 may provide a test flag signal (true/false signal) on a line 326a, e.g., to the microcontroller MC when the PLL circuit 320 is in a locked condition; the power level of the RF test signal RFTEST may be set using a variable gain amplifier (VGA), e.g., 328, and the power level may be detected using a power detector circuit, e.g., 328a; both the test flag signal on line 326a and the power detector circuit 328a may facilitate making the circuit compliant with the ISO 26262 standard; the BIST circuit 32 may be disabled during normal operation. In one or more embodiments of the BIST circuit 32 as discussed herein, the resulting RF test signal RFTEST may have the same characteristics as the echo radar signal discussed above. In one or more embodiments, feeding of the RF test signal may facilitate operation of various arrangements as exemplified herein. Possible arrangements of various blocks as represented in FIG. 1 are exemplified in FIGS. 5 and 6, where MMIC and PCB are schematically indicative of a Microwave/Millimeter-wave Monolithic Integrated Circuit and a Printed Circuit Board to mount the MMIC. The use of a hybrid coupler, balun, microstrip, or inductor between the output of the RF test signal generator 32 and the receiver inputs was found to influence the receiver performance. One or more embodiments may thus exploit leakage between the RF test signal coming from the output of the VGA (228 in FIG. 3, 328 in FIG. 4) and the input(s) of the receiver 26, e.g., by using external, PCB-hosted coupling as shown in FIG. 5 or MMIC-internal coupling as shown in FIG. 6. It will again be appreciated that reference to a radar sensor throughout this description is merely exemplary of a possible area of application of one or more embodiments.
One or more embodiments may in fact find a wide variety of applications, e.g., as exemplified in the introductory portion of this description. One or more embodiments may thus provide a method of generating a self-test signal (e.g., RFTEST) for a receiver of radiofrequency signals (e.g., a radar sensor 10) wherein a local oscillator signal (e.g., TX/LO) is generated (e.g., at 122) for mixing (e.g., at 28) with a reception signal (e.g., 22), the method including: applying frequency division (e.g., 124) to said local oscillator signal to produce a frequency-divided signal (e.g., fDIV); providing a signal generator (e.g., 222 in FIG. 3 or 320a in FIG. 4) for generating said self-test signal; and generating said self-test signal by operating said signal generator with operation of said signal generator monitored (e.g., via the counter 226 of FIG. 3) or controlled (e.g., via the PLL circuit 320 of FIG. 4) via said frequency-divided signal. One or more embodiments may include: generating said local oscillator signal via a first oscillator (e.g., 122); and generating via a second oscillator (e.g., 222 or 320a) a further oscillating signal to provide said self-test signal, with operation of said second oscillator monitored or controlled via said frequency-divided signal. One or more embodiments may include: setting the frequencies of said first oscillator and said second oscillator with a common coarse tuning signal (e.g., VCOARSE); and finely tuning the frequencies of said first oscillator and said second oscillator with respective fine tuning signals (e.g., VFINE, VFINE_AUX), at least one of said fine tuning signals (VFINE, VFINE_AUX) optionally produced by means of a digital-to-analog converter (222a). One or more embodiments may include selectively tuning (e.g., via the microcontroller MC) the frequency of said second oscillator (e.g., 222) to produce chirp modulation of said self-test signal. One or more embodiments may include: applying frequency division (e.g., at 124, 224) to said local oscillator signal and said further oscillating signal to produce respective frequency-divided oscillating signals (e.g., fDIV, fDIV_AUX); and monitoring the frequency of said further oscillating signal by comparing (e.g., via the frequency counter 226) said respective frequency-divided oscillating signals. One or more embodiments may include: providing a PLL circuit (e.g., 320) with an output oscillator (e.g., 320a) for generating said self-test signal, an input comparator (e.g., 320b), and a loop divider (e.g., 320c) between said output oscillator and said input comparator; and supplying to said input comparator of the PLL circuit said frequency-divided signal. One or more embodiments may include supplying to said input comparator of the PLL circuit a time-delayed (322, fREF) version of said frequency-divided signal. One or more embodiments may include selectively varying (e.g., via the microcontroller MC) the division factor of said loop divider to vary the frequency of said self-test signal.
One or more embodiments may provide a circuit (e.g., 120, 32), including: a local oscillator for generating a local oscillator signal; at least one mixer for mixing said local oscillator signal with a reception signal; at least one frequency divider for applying frequency division to said local oscillator signal to produce a frequency-divided signal; and at least one further oscillator, the circuit configured for operating with the method of one or more embodiments and generating said self-test signal with operation of said signal generator monitored or controlled via said frequency-divided signal. One or more embodiments may include a Microwave/Millimeter-wave Monolithic Integrated Circuit (MMIC) on a Printed Circuit Board (PCB), the circuit including at least one coupler (e.g., 24) for coupling said self-test signal to a receiver input, wherein said at least one coupler: is hosted on said Printed Circuit Board (PCB) externally of said Microwave/Millimeter-wave Monolithic Integrated Circuit (MMIC); or is hosted internally of said Microwave/Millimeter-wave Monolithic Integrated Circuit (MMIC). One or more embodiments may provide a receiver of radiofrequency signals (e.g., a radar sensor, including a radar sensor IC) including a circuit for generating self-test signals according to one or more embodiments. In one or more embodiments such a receiver may include a radar receiver for automotive vehicles, wherein said reception signal of the receiver is an echo signal from an object at a distance from a vehicle (see, e.g., O and d in FIG. 1). Without prejudice to the underlying principles, the details and the embodiments may vary, even significantly, with respect to what has been described herein by way of example, without departing from the extent of protection. The various embodiments described above can be combined to provide further embodiments. Aspects of the embodiments can be modified, if necessary, to employ concepts of the various patents, applications, and publications to provide yet further embodiments. Some embodiments may take the form of or include computer program products. For example, according to one embodiment there is provided a computer-readable medium including a computer program adapted to perform one or more of the methods or functions described above. The medium may be a physical storage medium such as, for example, a Read Only Memory (ROM) chip, or a disk such as a Digital Versatile Disk (DVD-ROM), Compact Disk (CD-ROM), a hard disk, a memory, a network, or a portable media article to be read by an appropriate drive or via an appropriate connection, including as encoded in one or more barcodes or other related codes stored on one or more such computer-readable mediums and being readable by an appropriate reader device. Furthermore, in some embodiments, some of the systems and/or modules and/or circuits and/or blocks may be implemented or provided in other manners, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), digital signal processors, discrete circuitry, logic gates, standard integrated circuits, state machines, look-up tables, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc., as well as devices that employ RFID technology, and various combinations thereof.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
27,592
11860224
DETAILED DESCRIPTION OF THE DISCLOSURE FIG. 14 illustrates a device 1402 comprising stacked die 1404-1408 and an interposer 1410. The die in the stack may only include functional circuitry that requires FIO signal connections to the substrate as described in FIG. 1, or they may include functional and TAP circuitry that requires FIO and TIO signal connections to the substrate as described in FIGS. 2 and 5. The interposer is similar to the previously described interposers in that it provides connectivity between the stacked die and a system substrate 1412 for the FIO or FIO and TIO signals. Interposer 1410 differs from the previously described interposers in that it is enhanced to include TAP and instrumentation circuitry (TAP&INT) 1414. The interposer TAP&INT circuitry 1414 is connected to the substrate via interposer TAP input (ITI) 1416 and interposer TAP output (ITO) 1418 signals to allow accessing the TAP&INT circuitry. FIG. 15 illustrates the interposer 1410 TAP&INT circuitry 1414 in more detail. As seen, the TAP&INT circuitry 1414 includes a TAP 204 and a number of instruments (I1-N) 1502-1504. The TAP 204 receives the ITI 1416 inputs (TDI, TCK, TMS and optionally TRST) from the substrate 1412 and outputs the ITO 1418 output (TDO) to the substrate. The TAP may access the instruments 1-N using any of the access approaches described in FIGS. 6-8. While any type of instrument may be implemented in the interposer 1410, this disclosure describes non-intrusive type instruments that passively monitor activities and conditions occurring in the device using the interposer 1410. FIG. 16 illustrates a device 1602 including an example interposer of the disclosure located between stacked die 1604 and a system substrate 1412. The interposer's TAP 204 provides access, via interface 1614, to a Monitor Trigger Unit 1606, Temperature Monitors 1608, Voltage & Analog Signal Monitors 1610, and Address & Data Bus Monitors 1612. The purpose of the Monitor Trigger Unit 1606 is to provide control, via bus 1616, to enable and operate the monitors 1608-1612. The purpose of the Temperature Monitors 1608 is to monitor temperature conditions of the device containing the interposer 1410. The purpose of the Voltage & Analog Signal Monitors 1610 is to monitor voltages and analog signal activity of the device containing the interposer 1410. The purpose of the Address & Data Bus Monitors 1612 is to monitor digital signal activity of address and data busses of the device containing the interposer 1410. FIG. 17 illustrates a device 1702 wherein the interposer 1410 of the disclosure provides a voltage bus connection (V Bus) 1704, a ground bus connection (G Bus) 1708, and functional input and/or output (FIO) signal connections 1706 between a substrate 1412 and stacked die 1604. The FIO connections can transfer digital or analog signals between the substrate and stacked die. The V Bus and G Bus connections to the substrate 1412 provide power and ground to the stacked die and to circuitry (TAP and instrumentation circuitry) in the interposer 1410. Multiple V Bus and G Bus connections may exist. The multiple V Bus connections may provide the same or different voltage levels. FIG. 18 illustrates a view of how the Monitor Trigger Unit 1606 and monitors 1608-1612 are coupled to the FIO 1706 connections and V & G Buses 1704 and 1708 existing in interposer 1410 of FIG. 17. The Monitor Trigger Unit 1606 has inputs coupled to functional address bus, functional data bus, and functional control signals on the FIO connections 1706 of interposer 1410.
The functional control signals may include functional clock signals that time functional circuitry, functional read/write signals that time memory read and/or write operations, or other types of functional timing signals, such as, but not limited to, oscillator and phase locked loop clock outputs. The Trigger Unit 1606 also has an input connected to an optional external trigger (XTRG) signal, and inputs and an output coupled to the TDI, CTL and TDO interface 1614 of TAP 204 of interposer 1410. The XTRG signal may come from the stacked die 1604, the substrate 1412, or a circuit existing in the interposer 1410. The Monitor Trigger Unit 1606 has a monitor control bus 1616 to control the operation of the monitors within interposer 1410. The Address & Data Bus Monitor 1612 has inputs coupled to functional address and data buses on the FIO connections 1706 of interposer 1410. The Bus Monitor 1612 also has inputs and an output coupled to the TDI, CTL and TDO interface 1614 of TAP 204 of interposer 1410. The Bus Monitor has inputs connected to the monitor control bus 1616 from Trigger Unit 1606. The Voltage & Analog Signal Monitor 1610 has inputs coupled to V Bus 1704, G Bus 1708, and functional analog signals on the FIO connections 1706 of interposer 1410. The Voltage & Analog Signal Monitor 1610 also has inputs and an output coupled to the TDI, CTL and TDO interface 1614 of TAP 204 of interposer 1410. The Voltage & Analog Signal Monitor has inputs connected to the monitor control bus 1616 from Trigger Unit 1606. The Temperature Monitor 1608 has inputs coupled to temperature sensors (TS) 1802 that may exist in the interposer 1410, in the substrate 1412, or in the die stack 1604. The Temperature Monitor 1608 also has inputs and an output coupled to the TDI, CTL and TDO interface 1614 of TAP 204 of interposer 1410. The Temperature Monitor has inputs connected to the monitor control bus 1616 from Trigger Unit 1606. One common type of temperature sensor 1806 that could be used to monitor temperatures includes a voltage divider formed by a thermistor and a resistor. As the temperature varies, the resistance of the thermistor changes, which changes the voltage output from the voltage divider. Changes in the voltage divider output can be calibrated into temperature changes. Thermocouples and other temperature measuring circuits may also be used. FIG. 19 illustrates monitor control bus 1616 of the monitor trigger unit 1606 connected to an N number of monitors 1608-1612. The monitor control bus consists of a clock (CLK) signal, a Start signal, monitor enable signals (MENA1-N), and monitor input select (MISEL1-N) signals. The CLK signal is common to all monitors 1-N and times the operation of the monitors 1-N. The Start signal is common to all monitors 1-N and starts the operation of one or more of the monitors 1-N. The MENA1-N signals enable the operation of one or more of the monitors 1-N. Typically, but not necessarily, there will be one MENA signal for each monitor. The MISEL1-N signals control the selection of inputs on one or more monitors that have selectable inputs. "Plug and Play" Monitor Control Bus The monitor control bus 1616 is "plug and play" in nature in that it can be interfaced to any number and/or type of monitors that have inputs adapted for receiving and operating in response to the CLK, Start, MENA1-N and MISEL1-N signals provided by monitor trigger unit 1606 on monitor control bus 1616. All that is required to extend the number of monitors on the monitor control bus 1616 is to provide a MENA signal for each monitor and MISEL signals, if necessary, to each monitor coupled to the monitor control bus 1616, as illustrated in the sketch below.
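As a rough illustration of this plug-and-play idea (not part of the disclosure; the class and method names are hypothetical), the following Python sketch models bus 1616 as a bundle of shared CLK/Start signals plus per-monitor MENA/MISEL entries, so that adding a monitor amounts to appending one entry:

```python
from dataclasses import dataclass, field

@dataclass
class MonitorControlBus:
    """Model of bus 1616: CLK and Start are shared; MENA/MISEL are per monitor."""
    clk: bool = False
    start: bool = False
    mena: list = field(default_factory=list)    # one enable per monitor
    misel: list = field(default_factory=list)   # one input-select per monitor

    def attach_monitor(self, input_select=0):
        """'Plug in' a new monitor by extending the per-monitor signals."""
        self.mena.append(False)
        self.misel.append(input_select)
        return len(self.mena) - 1               # index of the new monitor

bus = MonitorControlBus()
temp = bus.attach_monitor()                 # e.g., a temperature monitor
addr = bus.attach_monitor(input_select=1)   # e.g., an address bus monitor
bus.mena[addr] = True                       # enable only the bus monitor
bus.start = True                            # trigger unit starts enabled monitors
print(bus)
```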
FIG. 20A illustrates an example implementation of Trigger Unit 1606. The Trigger Unit includes an address bus comparator 2002, an address multiplexer 2004, a start address storage register 2006, a stop address storage register 2008, a data bus comparator 2010, a data multiplexer 2012, a start data storage register 2014, a stop data storage register 2016, a programmable trigger controller 2018, and a counter 2020, all connected as shown. The address bus comparator 2002 inputs an address bus from FIO connections 1706 and compares the address to an address stored in the start 2006 or stop 2008 address registers. The address bus comparator outputs an address trigger (ATRG) to the programmable trigger controller if a match occurs between the address bus and the start or stop stored addresses. Addresses are stored in the start and stop address registers by a TDI to TDO shift operation performed by the interposer's TAP 204 via interface 1614. Multiplexer 2004 is controlled by a select (SEL) signal from the programmable trigger controller to determine whether the address bus is compared to the stored start or stop address. The data bus comparator 2010 inputs a data bus from FIO connections 1706 and compares the data to data stored in the start 2014 or stop 2016 data registers. The data bus comparator outputs a data trigger (DTRG) to the programmable trigger controller if a match occurs between the data bus and the start or stop stored data. Data are stored in the start and stop data registers by a TDI to TDO shift operation performed by the interposer's TAP 204 via interface 1614. Multiplexer 2012 is controlled by the SEL signal from the programmable trigger controller to determine whether the data bus is compared to the stored start or stop data. The programmable trigger controller 2018 inputs the ATRG signal from comparator 2002, the DTRG signal from comparator 2010, the optional XTRG signal, a count complete (CC) signal from counter 2020, and functional control signals from FIO connections 1706. The programmable trigger controller outputs the CLK signal, the Start signal, the MENA1-N signals, and the MISEL1-N signals of control bus 1616, and a counter enable (CE) signal to counter 2020. The programmable trigger controller is programmed by a TDI to TDO shift operation performed by the interposer's TAP 204 via interface 1614. The counter 2020 inputs the CE and CLK signals from the programmable trigger controller and outputs the CC signal to the programmable trigger controller. When enabled by CE, the counter operates for a count in response to the CLK signal. The count is loaded into the counter by a TDI to TDO shift operation performed by the interposer's TAP 204 via interface 1614. When the count expires, the counter outputs the CC signal to the programmable trigger controller. The TDI and TDO signals of the start and stop address registers 2006-2008, the start and stop data registers 2014-2016, the programmable trigger controller 2018, and the counter 2020 may be separately coupled to the TDI and TDO signals of the interposer's TAP 204 interface 1614 so that each may be accessed individually. Alternatively, the TDI and TDO signals of the start and stop address registers 2006-2008, the start and stop data registers 2014-2016, the programmable trigger controller 2018, and the counter 2020 may be daisy-chained between the TDI and TDO signals of the interposer's TAP 204 interface 1614 so that they all may be accessed together.
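A minimal Python sketch of the start/stop comparator idea described for FIG. 20A follows (hypothetical, for illustration only): the SEL state picks which stored pattern each comparator matches against, and a match raises the corresponding trigger:

```python
class TriggerComparators:
    """Models comparators 2002/2010 with start/stop registers and SEL muxing."""
    def __init__(self):
        self.start_addr = self.stop_addr = None   # registers 2006 / 2008
        self.start_data = self.stop_data = None   # registers 2014 / 2016
        self.sel_stop = False                     # SEL: False=start, True=stop

    def sample(self, addr_bus, data_bus):
        """One comparison cycle; returns (ATRG, DTRG)."""
        addr_ref = self.stop_addr if self.sel_stop else self.start_addr
        data_ref = self.stop_data if self.sel_stop else self.start_data
        return addr_bus == addr_ref, data_bus == data_ref

tc = TriggerComparators()
tc.start_addr, tc.start_data = 0x4000, 0xDEAD   # loaded via TAP shift (made up)
atrg, dtrg = tc.sample(0x4000, 0xBEEF)
print(f"ATRG={atrg}, DTRG={dtrg}")              # ATRG=True, DTRG=False
```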
FIG. 20B is provided to illustrate that the XTRG input to the programmable trigger controller may come from a multiplexer 2022 which inputs a Start XTRG and a Stop XTRG. The SEL output of the programmable trigger controller controls multiplexer 2022 to select between the Start XTRG and Stop XTRG inputs, as described above for selecting the Start and Stop data and address inputs to multiplexers 2004 and 2012. FIG. 21 illustrates an example implementation of programmable trigger controller 2018. The programmable trigger controller includes a trigger controller 2102, a functional control signal multiplexer 2104, and a program register 2106 which is accessible by TAP interface 1614. The trigger controller 2102 inputs the XTRG, ATRG, DTRG, and CC signals, the CLK signal output from multiplexer 2104, and programming data input 2108 from program register 2106. The trigger controller 2102 outputs the SEL and Start signals of bus 1616 and the CE signal to counter 2020. The multiplexer 2104 inputs functional control signals from FIO 1706 and signal selection control 2112 from program register 2106. The multiplexer 2104 selects a desired timing signal from the functional control inputs 1706 and outputs it as the CLK 2110 signal of bus 1616. The program register 2106 outputs selection control signals to multiplexer 2104, program data input to trigger controller 2102, and the MENA1-N and MISEL1-N signals of bus 1616. The program register is loaded by a TDI to TDO shift operation from TAP interface 1614. FIG. 22 illustrates a detailed example implementation of trigger controller 2102, which includes a start condition multiplexer 2202, a stop condition multiplexer 2204, a start stop condition multiplexer 2206, and a state machine 2208. Multiplexer 2202 has inputs for various example start conditions, including a selectable start nTRG 2210 where "n" can be a start XTRG, a selectable start ATRG or start DTRG, a selectable start nTRG "AND'ed" with a selectable start mTRG 2212 where "m" can be any start TRG other than the start nTRG, or any sequence of selectable start nTRG and start mTRG signals 2216 occurring separately in time. Multiplexer 2202 has condition select (CS) inputs coupled to program register 2106 via bus 2108 and a Start Condition output coupled to multiplexer 2206. Multiplexer 2204 has inputs for various example stop conditions, including a selectable stop nTRG 2218, a selectable stop nTRG "AND'ed" or "OR'ed" with a selectable stop mTRG 2220, a count complete (CC) signal 2222, and a selectable stop nTRG and stop mTRG sequence 2224. Multiplexer 2204 has condition select (CS) inputs coupled to program register 2106 via bus 2108 and a Stop Condition output coupled to multiplexer 2206. In this example, the TRG ANDing function is performed by AND gates 2226, the OR function is performed by OR gates 2228, and TRG sequences are detected by a sequence detector (SD) state machine 2230 timed by CLK signal 2110. Multiplexer 2206 has inputs for the Start Condition signal from multiplexer 2202, the Stop Condition signal from multiplexer 2204, a Start/Stop selection (SEL) signal from state machine 2208, and a start stop condition (SSC) output.
State machine 2208 has an input coupled to the SSC output of multiplexer 2206, a clock input coupled to the CLK signal 2110, an enable (ENA) input coupled to program register 2106 via bus 2108, and outputs for the SEL, Start, and CE signals. FIG. 23 illustrates an example operation diagram of state machine 2208. When the ENA signal is not asserted, the state machine will be disabled in an Idle state 2302. In state 2302, the SEL signal is set for selecting the Start Condition. When the ENA signal is asserted, the state machine transitions to state 2304 where it polls for a Start Condition from multiplexer 2206. When a Start Condition occurs, the state machine transitions to state 2306 where it: (1) sets the Start signal of bus 1616, (2) sets the SEL signal for selecting the Stop Condition, and (3) sets the CE signal to enable counter 2020, and polls for a Stop Condition from multiplexer 2206. When a Stop Condition occurs, the state machine transitions to state 2308 where it: (1) resets the Start signal of bus 1616, (2) sets the CE signal to disable the counter 2020, (3) sets the SEL signal for selecting the Start Condition, and (4) waits for the ENA signal to be de-asserted. When ENA is de-asserted, the state machine transitions to Idle state 2302. The CE signal is set in state 2306 to allow the counter's CC signal to be selected for providing the Stop Condition. For example, a monitoring operation may be started by any of the selectable Start Conditions input to multiplexer 2202; then, after a predetermined count, the monitoring operation may be terminated by the CC output of counter 2020. It should be understood that a further refinement of the operation diagram of FIG. 23 may include optionally enabling the CE signal based upon whether the counter 2020 is selected for providing the Stop Condition. This would eliminate the counter from consuming power when it is not used to provide the Stop Condition. As seen in FIGS. 20A-20B, setting the SEL signal for a Start Condition in state 2302 includes setting multiplexers 2004, 2012 and, if present, multiplexer 2022 to select the start data and start address patterns to be input to comparators 2002 and 2010 and the start XTRG to be input to the programmable trigger controller 2018. Also as seen in FIGS. 20A-20B, setting the SEL signal for a Stop Condition in state 2306 includes setting multiplexers 2004, 2012 and, if present, multiplexer 2022 to select the stop data and stop address patterns to be input to comparators 2002 and 2010 and the stop XTRG to be input to the programmable trigger controller 2018. FIG. 24 illustrates one example timing diagram depicting the operation of state machine 2208. Initially the state machine is in state 2302 waiting for the ENA signal to be asserted. When the ENA signal is asserted, the state machine transitions to state 2304 to poll for a Start Condition on the SSC output of multiplexer 2206. When a Start Condition is detected, the state machine transitions to state 2306 to poll for a Stop Condition on the SSC output of multiplexer 2206. In state 2306, the Start, SEL, and CE signals are asserted. The asserted Start signal enables a selected one or more monitors to begin a monitoring operation timed by CLK 2110. The asserted CE signal enables the counter 2020 to begin a counting operation timed by the CLK 2110. The asserted SEL signal controls multiplexer 2206 to output a stop condition to the state machine. The SEL signal also controls multiplexers 2004, 2012, and 2022 to select the stop data, address, or XTRG conditions. When a Stop Condition is detected, the state machine transitions to state 2308 to wait for the ENA signal to be de-asserted. In state 2308, the Start, SEL, and CE signals are de-asserted. When the ENA signal is de-asserted, the state machine transitions back to the Idle state 2302.
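The following Python sketch (hypothetical, for illustration; it abstracts the SSC input into a per-cycle boolean) mimics the Idle / poll-Start / monitor / wait-for-ENA cycle of state machine 2208 described above:

```python
class TriggerStateMachine:
    """Abstract model of state machine 2208 (states named per FIG. 23)."""
    def __init__(self):
        self.state = "Idle_2302"
        self.start = self.sel_stop = self.ce = False  # Start, SEL, CE outputs

    def step(self, ena, ssc):
        """Advance one CLK cycle given ENA and the SSC multiplexer output."""
        if self.state == "Idle_2302":
            if ena:
                self.state = "PollStart_2304"
        elif self.state == "PollStart_2304":
            if ssc:                       # Start Condition detected
                self.start = self.sel_stop = self.ce = True
                self.state = "Monitor_2306"
        elif self.state == "Monitor_2306":
            if ssc:                       # Stop Condition detected
                self.start = self.sel_stop = self.ce = False
                self.state = "WaitEna_2308"
        elif self.state == "WaitEna_2308":
            if not ena:
                self.state = "Idle_2302"

sm = TriggerStateMachine()
for ena, ssc in [(1, 0), (1, 1), (1, 0), (1, 1), (0, 0)]:
    sm.step(bool(ena), bool(ssc))
    print(sm.state, "Start =", sm.start)
```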
FIG. 25 illustrates an example monitor architecture 2502 that could be used by the disclosure. The architecture includes a parallel register 2504, an auto-incrementing monitor memory 2506, a serial/parallel register 2508, and a monitor controller 2510, all connected as shown. Register 2504 has a parallel input bus 2512, a parallel output bus 2514, and a clock (CLK) input 2516. Register 2508 has a serial bus connected to the TDI, CTL, and TDO signals of the interposer TAP interface 1614, a parallel input bus 2518, and a parallel output bus 2520. Controller 2510 has inputs connected to the Start, CLK, and a monitor enable (MENA) signal of bus 1616 of programmable trigger controller 2018. Controller 2510 has an increment 1 (INC1) output, a write (WR) output, and a reset 1 (RST1) output. Memory 2506 has a parallel data input (DI) bus coupled to the parallel data output bus 2514 of register 2504, and a parallel data output (DO) bus coupled to the parallel data input bus 2518 of register 2508. Memory 2506 has a first memory address increment input coupled to the INC1 output of controller 2510, a memory write input coupled to the WR output of controller 2510, and a first address reset input coupled to the RST1 output of controller 2510. Memory 2506 has a memory read (RD) input coupled to an output of bus 2520 and a second address reset input (RST2) coupled to an output of bus 2520. Memory 2506 has a second memory address increment input (INC2) coupled to an output from the CTL bus of interposer TAP bus 1614. In this example, and when register 2508 is selected for access by a TAP instruction that is used to read the contents of memory 2506, the INC2 signal is asserted each time the TAP passes through the Exit1-DR state of FIG. 4. While in this example the Exit1-DR state is used to provide the INC2 signal, it should be understood that other appropriate TAP states could be used to provide the INC2 signal during memory read operations. At the beginning of a memory read operation, register 2508 is accessed by the TAP interface 1614 to toggle the RST2 signal of bus 2520 and to set the RD signal of bus 2520 to place the memory in read mode. Toggling the RST2 signal resets the memory address to a starting point from which the read operation will begin, typically address zero. After this initial setup procedure, register 2508 is accessed by the TAP to capture the monitor data stored at the starting point address during the Capture-DR state of FIG. 4 and to shift the captured data out during the Shift-DR state of FIG. 4. The TAP then transitions through the Exit1-DR state of FIG. 4 to activate the INC2 signal to increment the memory's address. The TAP then transitions to the Capture-DR state, via the Update-DR and Select-DR states, to capture and shift out the data stored in the next memory address location. This capture, shift, and increment address process repeats until all the contents of the memory have been read. During these TAP controlled memory read operations, the RD signal of bus 2520 is set to keep the memory in read mode. At the end of the read operation, the TAP resets the RD signal.
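A minimal Python sketch of this capture/shift/increment read sequence may help fix the idea; it is hypothetical, with the memory and the TAP-state walk collapsed into a simple loop:

```python
def tap_read_memory(memory):
    """Model of the TAP-driven read: RST2 resets the address, then each pass
    captures a word (Capture-DR), shifts it out (Shift-DR), and increments
    the address on Exit1-DR, until the whole memory has been read."""
    address = 0                      # RST2: reset address counter to start
    rd = True                        # RD set: memory held in read mode
    captured = []
    while rd and address < len(memory):
        word = memory[address]       # Capture-DR: latch DO into register 2508
        captured.append(word)        # Shift-DR: word shifted out over TDO
        address += 1                 # Exit1-DR: INC2 bumps the address
    rd = False                       # read done: TAP resets the RD signal
    return captured

monitor_memory = [0x10, 0x22, 0x37]  # example stored monitor samples
print([hex(w) for w in tap_read_memory(monitor_memory)])
```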
FIG.26illustrates an example implementation of an auto-addressing monitor memory2506that could be used in this disclosure. The auto-addressing monitor memory consists of monitor memory2602, an address counter2604, And gate2606and Or gate2608. The memory2602has a data input (DI) for inputting parallel data2514from register2504, the WR input from controller2510, the RD input from register2508and an address input from address counter2604. The memory has a data output (DO) for outputting data to the parallel input2518of register2508. The address counter has a RST input from And gate2606, a CLK input from Or gate2608and an address bus output to memory2602. And gate2606has an input for the RST1 signal from controller2510, an input for the RST2 signal from register2508and an output to provide the counter RST signal. Or gate2608has an input for the INC1 signal from the controller2510, an input for the INC2 signal from the TAP CTL bus and an output to provide the counter CLK signal. During monitor store operations, controller2510is enabled to provide the RST1, INC1 and WR signals to auto-addressing monitor memory2506. During monitor read operations, the interposer's TAP accesses register2508to provide the RST2, INC2 and RD signals to auto-addressing monitor memory2506to read out its stored contents. FIG.27illustrates an example implementation of monitor controller2510which consists of a state machine. The state machine has inputs for inputting the Start, CLK and MENA signals1616from monitor trigger unit1606and outputs for outputting the RST1, WR and INC1 signals to auto-addressing monitor memory2506and the CLK signal2516to register2504. FIG.28illustrates an example operational diagram of state machine2510. Initially the state machine will be in an Idle state2802waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state2804to output a RST1 to reset address counter2604to the starting address. From state2804the state machine transitions to state2806where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state2808where it outputs a CLK signal2516to register2504. In response to the CLK signal, register2504stores the data present at its input2512. From state2808the state machine transitions to state2810where it outputs a WR signal to auto-addressing monitor memory2506. In response to the WR signal, auto-addressing monitor memory2506stores the data that was stored in register2504in response to the CLK signal of state2808. From state2810the state machine transitions to state2812where it outputs an INC1 signal to address counter2604to select the next memory location to be written to. If the Start signal is still asserted, the state machine transitions back to state2808to repeat the CLK, WR and INC1 state operations. If the Start signal is de-asserted, the state machine transitions to state2806to wait for either another Start signal or the MENA signal to be de-asserted. FIG.29illustrates a monitor2502wherein the purpose is to monitor the activity of an address bus2902within an interposer1410. FIG.30illustrates a monitor2502wherein the purpose is to monitor the activity of a data bus3002within an interposer1410. FIG.31illustrates a monitor3102wherein the purpose is to monitor the activity of either an address bus2902or a data bus3002within an interposer1410. Monitor3102differs from monitor2502in that it includes a multiplexer3104to select whether the input to register2504comes from an address bus2902or a data bus3002. A MISEL signal from bus1616of monitor trigger unit1606determines whether the address bus or data bus is selected for monitoring.
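The bus monitors ofFIGS.29-31all rely on the store sequencing ofFIGS.26-28: reset the address, then, while Start is asserted, clock register2504, write the registered word to memory and increment the address. A minimal Python sketch of that loop follows, assuming callable signal inputs and a dict-like memory, both of which are illustrative stand-ins rather than structures defined by the disclosure.

```python
# Behavioral sketch of monitor controller 2510 (FIGS. 27-28) driving
# the auto-addressing monitor memory of FIG. 26 during a store
# operation. mena/start/sample_bus are zero-argument callables;
# memory is a dict keyed by address.

def run_monitor_store(mena, start, sample_bus, memory):
    while not mena():           # Idle: wait for MENA to be asserted
        pass
    address = 0                 # state 2804: RST1 resets address counter 2604
    while mena():
        if not start():         # state 2806: poll for a Start signal
            continue
        word = sample_bus()     # state 2808: CLK clocks register 2504
        memory[address] = word  # state 2810: WR stores the registered word
        address += 1            # state 2812: INC1 selects the next location
```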
FIG.32illustrates a monitor3202wherein the purpose is to monitor the activity of an analog signal within an interposer1410. The analog signal may be any type of signal, such as a time varying voltage signal, such as, but not limited to, a sine wave, or a fixed voltage signal, such as, but not limited to, a power supply voltage. Monitor3202differs from monitor2502in that it includes an analog switch (SW)3204, an analog to digital converter (ADC)3206and a monitor controller3208adapted for controlling the ADC3206as described below in regard toFIGS.33-36. Any type of ADC can be used that has an analog input and parallel digital outputs, including, but not limited to, successive approximation ADCs and Flash ADCs. The output of the analog switch3204may be directly coupled to the analog input of the ADC or an amplifier (A)3210may exist between the analog switch output and ADC input. If the amplifier is programmable, for example a programmable gain amplifier, it can receive programming (PRG) input3212by extending the length of register2508to provide the PRG input to the amplifier via bus2520. The programming (PRG) input may alternately come from a source, for example a TAP register, external of monitor3202. The analog switch receives MISEL input from bus1616of monitor trigger unit1606to select one of the switch inputs (IN1-N)3214to be output from the switch. The parallel digital outputs of the ADC are input to parallel inputs of monitor memory2506. FIG.33illustrates an example monitor controller3208which includes a state machine. The state machine differs from state machine2510ofFIG.27in that it includes an optional Done input from ADC3206. Also, depending upon the type of ADC used, the operation of the CLK output to the ADC may be different from the operation of the state machine described inFIGS.27and28. FIG.34illustrates a first example operational diagram of state machine3208. Initially the state machine will be in an Idle state3402waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state3404to output a RST1 to reset address counter2604to the starting address. From state3404the state machine transitions to state3406where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state3408where it outputs a CLK signal to ADC3206. In response to the CLK signal, ADC3206samples its analog input, digitizes the sampled signal and outputs a parallel digital representation of the analog signal to the parallel inputs of memory2506. The ADC in this example is assumed to have a high speed internal clock that is enabled by the CLK signal to convert the sampled analog input into the parallel digital output. The analog to digital conversion is fast enough to occur before the WR signal is asserted in state3410. From state3408the state machine transitions to state3410where it outputs a WR signal to auto-addressing monitor memory2506. In response to the WR signal, auto-addressing monitor memory stores the parallel outputs of ADC3206. From state3410the state machine transitions to state3412where it outputs an INC1 signal to address counter2604to select the next memory location to be written to. If the Start signal is still asserted, the state machine transitions back to state3408to repeat the CLK, WR and INC1 state operations.
If the Start signal is de-asserted, the state machine transitions to state3406to wait for either another Start signal or the MENA signal to be de-asserted. FIG.35illustrates a second example operational diagram of state machine3208. Initially the state machine will be in an Idle state3502waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state3504to output a RST1 to reset address counter2604to the starting address. From state3504the state machine transitions to state3506where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state3508where it outputs a number (N) of CLK signals to ADC3206. In response to the CLK signals, ADC3206samples its analog input, digitizes the sampled signal and outputs a parallel digital representation of the analog signal to the parallel inputs of memory2506. The ADC in this example is assumed to operate in response to the N CLK signals of state3508to convert the sampled analog input into the parallel digital output. From state3508the state machine transitions to state3510where it outputs a WR signal to auto-addressing monitor memory2506. In response to the WR signal, auto-addressing monitor memory stores the parallel outputs of ADC3206. From state3510the state machine transitions to state3512where it outputs an INC1 signal to address counter2604to select the next memory location to be written to. If the Start signal is still asserted, the state machine transitions back to state3508to repeat the CLK, WR and INC1 state operations. If the Start signal is de-asserted, the state machine transitions to state3506to wait for either another Start signal or the MENA signal to be de-asserted. FIG.36illustrates a third example operational diagram of state machine3208. Initially the state machine will be in an Idle state3602waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state3604to output a RST1 to reset address counter2604to the starting address. From state3604the state machine transitions to state3606where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state3608where it outputs a CLK signal to ADC3206and polls for a Done signal from the ADC3206. In response to the CLK signal, ADC3206samples its analog input, digitizes the sampled signal, outputs a parallel digital representation of the analog signal to the parallel inputs of memory2506, then outputs the Done signal to the state machine3208. The ADC in this example is assumed to have an internal clock that is enabled by the CLK signal to convert the sampled analog input into the parallel digital output. The analog to digital conversion of this example is not fast enough to occur before the WR signal is asserted in state3610, therefore the state machine must remain in state3608until the Done signal is asserted. In state3610the state machine outputs a WR signal to auto-addressing monitor memory2506. In response to the WR signal, auto-addressing monitor memory stores the parallel outputs of ADC3206. From state3610the state machine transitions to state3612where it outputs an INC1 signal to address counter2604to select the next memory location to be written to. If the Start signal is still asserted, the state machine transitions back to state3608to repeat the CLK, WR and INC1 state operations. If the Start signal is de-asserted, the state machine transitions to state3606to wait for either another Start signal or the MENA signal to be de-asserted.
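The operational diagrams differ only in how the CLK state handshakes with the ADC. The Python sketch below contrasts the three variants ofFIGS.34-36together with the fourth variant ofFIG.37described next; the adc object and the mode names are illustrative assumptions, not interfaces defined by the disclosure.

```python
# Sketch of the four ADC handshakes used inside the CLK state of the
# monitor controller. adc.clk(), adc.done() and adc.read_output() are
# hypothetical stand-ins for the CLK output, Done input and parallel
# data output described in the text.

def convert_sample(adc, mode, n_clocks=8):
    if mode == "fast_self_timed":       # FIG. 34: one CLK; conversion
        adc.clk()                       # completes before WR is asserted
    elif mode == "n_clocks":            # FIG. 35: N CLKs drive the conversion
        for _ in range(n_clocks):
            adc.clk()
    elif mode == "done_polled":         # FIG. 36: one CLK, then poll Done
        adc.clk()                       # before WR may be asserted
        while not adc.done():
            pass
    elif mode == "clocked_until_done":  # FIG. 37 (described next): keep
        while not adc.done():           # clocking until Done asserts
            adc.clk()
    return adc.read_output()            # parallel word for monitor memory
```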
FIG.37illustrates a fourth example operational diagram of state machine3208. Initially the state machine will be in an Idle state3702waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state3704to output a RST1 to reset address counter2604to the starting address. From state3704the state machine transitions to state3706where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state3708where it outputs CLK signals to ADC3206and polls for a Done signal from the ADC3206. In response to the CLK signals, ADC3206samples its analog input, digitizes the sampled signal, outputs a parallel digital representation of the analog signal to the parallel inputs of memory2506, then outputs the Done signal to the state machine3208. The ADC in this example is assumed to operate in response to the CLK signals output during state3708to convert the sampled analog input into the parallel digital output. When the analog to digital conversion is complete the Done signal is asserted and the state machine transitions to state3710. In state3710the CLK outputs are stopped and a WR signal is output to memory2506. In response to the WR signal, auto-addressing monitor memory stores the parallel outputs of ADC3206. From state3710the state machine transitions to state3712where it outputs an INC1 signal to address counter2604to select the next memory location to be written to. If the Start signal is still asserted, the state machine transitions back to state3708to repeat the CLK, WR and INC1 state operations. If the Start signal is de-asserted, the state machine transitions to state3706to wait for either another Start signal or the MENA signal to be de-asserted. FIG.38illustrates a monitor3802wherein the purpose is to simultaneously monitor the activity of a pair of analog signals within an interposer1410. The analog signals may be any type of signals, such as time varying voltage signals, such as, but not limited to, sine wave signals, or fixed voltage signals, such as, but not limited to, power supply and/or ground voltages. Monitor3802differs from monitor3202in that it includes two analog switches (SW)3204, two analog to digital converters (ADC)3206and a monitor memory3804having dual parallel input ports3214, one for each parallel output of the ADCs. Any of the previously described types of ADCs may be used. The outputs of the analog switches3204may be directly coupled to the analog inputs of the ADCs or amplifiers may exist between the analog switch outputs and ADC inputs. If the amplifiers are programmable they can receive programming input as described inFIG.32. The analog switches receive MISEL input from bus1616to select one of their switch inputs3214to be output to the ADCs. The parallel digital outputs of the ADCs are input to parallel inputs of the dual input ports of monitor memory3804. The monitor controller3208can operate the ADCs as described inFIGS.34-37. This type of analog monitor is used when it is desired to monitor differential analog voltages. FIG.39illustrates a stacked die3902mounted on an interposer3904which is mounted on a substrate3906. The interposer provides a voltage bus (VB)3908, ground bus (GB)3910and functional interconnects, including analog signal (AS) interconnects3912and3914between the stacked die and substrate. The interposer includes the single ended analog signal monitor3202ofFIG.32. The inputs3214of analog monitor3202are connected to the VB3908, GB3910, AS3912and AS3914.
When enabled by monitor trigger unit1606, monitor3202operates to sample, digitize and store the voltage levels occurring in time on a selected input, i.e. VB, GB or AS. When the monitoring operation ends, the stored digital representations of the sampled voltages can be shifted out of the monitor memory for examination, via the interposer TAP204. The single ended analog signal monitoring ofFIG.39can be triggered to start and stop during selected functional start and stop conditions detected by the monitor trigger unit1606. For example, a single ended monitoring of the voltage on the VB or GB connection can be triggered to occur over a functional stacked die operation defined by a start and stop condition, or a single ended monitoring of a voltage on a selected AS connection can be triggered to occur over a functional stacked die operation defined by a start and stop condition. Monitoring the VB or GB connection allows testing that the voltages on the VB or GB remain at acceptable levels during power intensive functional operations of the stacked die. Monitoring an AS connection allows testing that the analog voltage signals on the connection are operating properly and within specification during a functional operation of the stacked die. FIG.40illustrates a stacked die3902mounted on an interposer4002which is mounted on a substrate3906. The interposer provides a voltage bus (VB)3908, ground bus (GB)3910and functional interconnects, including analog signal (AS) interconnects3912and3914. The interposer includes the differential analog signal monitor3802ofFIG.38. First selectable inputs3214of analog monitor3802are connected to the VB3908at contact point4004, GB3910at contact point4008and AS3912. Second selectable inputs3214of analog monitor3802are connected to the VB3908at contact point4006, GB3910at contact point4010and AS3914. Contact point4004is the VB connection in close proximity to stacked die3902and contact point4006is the VB connection in close proximity to substrate3906. Contact point4008is the GB connection in close proximity to stacked die3902and contact point4010is the GB connection in close proximity to substrate3906. When enabled by monitor trigger unit1606, monitor3802operates to sample, digitize and store differential voltage levels selected on the first and second inputs3214. The VB voltage levels at contact points4004and4006may be selected to allow monitoring the voltage differences occurring in time between points4004and4006to determine the voltage drop on the VB bussing path3908. The GB voltage levels at contact points4008and4010may be selected to allow monitoring the voltage differences occurring in time between points4008and4010to determine the voltage drop on the GB bussing path3910. AS3912and AS3914may be selected to allow monitoring the voltage differences occurring in time between AS3912and AS3914. When the differential monitoring operation ends, the stored digital representations of the sampled differential voltages can be shifted out of the monitor's memory for examination, via the interposer TAP204. The differential analog signal monitoring ofFIG.40can be triggered to start and stop during selected functional start and stop conditions detected by the monitor trigger unit1606. For example, a differential monitoring of the voltage drop across the VB or GB connection can be triggered to occur over a functional stacked die operation defined by a start and stop condition, or a differential monitoring of the voltages occurring on two selected AS connections can be triggered to occur over a functional stacked die operation defined by a start and stop condition. Differentially monitoring the voltage drops across the VB or GB connection allows testing that the voltage drops remain within acceptable levels during power intensive functional operations of the stacked die. Further, by knowing the resistance of the VB and GB connections, the supply and ground currents through the connections may be determined by Ohm's Law. By knowing the current through and the voltage drop across a VB or GB, power monitoring can be performed during a selected functional operation of the die stack. Differentially monitoring the voltages on two AS connections allows testing that the analog signals are operating properly and within specification during a functional operation of the stacked die.
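As a worked illustration of the differential drop measurement and the Ohm's Law current and power estimates just described, consider the relations below; the bus resistance and supply voltage values are assumed for illustration only and are not taken from the disclosure.

```latex
% Illustrative values only: R_{VB} and V_{supply} are assumptions.
\[
  V_{drop} = V_{4004} - V_{4006}, \qquad
  I_{VB} = \frac{V_{drop}}{R_{VB}}, \qquad
  P \approx V_{supply} \cdot I_{VB}
\]
\[
  \text{e.g.}\quad V_{drop} = 25\,\mathrm{mV},\;
  R_{VB} = 50\,\mathrm{m\Omega}
  \;\Rightarrow\; I_{VB} = 0.5\,\mathrm{A},\quad
  P \approx 1.8\,\mathrm{V} \times 0.5\,\mathrm{A} = 0.9\,\mathrm{W}
\]
```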
FIG.41illustrates a monitor4102wherein the purpose is to monitor temperature sensor (TS) outputs4110. The outputs may come from any type of TS such as those mentioned in regard toFIG.18. Monitor4102is the same as monitor3202with the exception that it includes a counter4106and a modified auto-addressing monitor memory4104. The counter4106has inputs for the RST1 and INC1 signals from controller3208and temperature sensor address (TSA) outputs. The TSA outputs are input to analog switch (SW)3204in substitution of the MISEL inputs ofFIG.32. Each TSA count pattern controls SW3204to select one of the TS outputs to be input to the ADC3206. The TSA count patterns are also input to additional inputs provided on monitor memory4104to allow identifying which TS is currently being selected for a temperature measurement. When enabled, the monitor controller state machine3208operates to control the ADC3206and monitor memory4104as previously described. The monitor controller state machine3208also controls counter4106using the RST1 and INC1 signals. Depending on the type of ADC being used, the monitor controller state machine operates according to one of the operational diagrams ofFIGS.34-37. The operation of temperature sensor monitor4102is described below using the operational state diagram ofFIG.34as one example. As seen in the operational diagram ofFIG.34, state machine3208will initially be in an Idle state3402waiting for the MENA signal to be asserted. When MENA is asserted the state machine transitions to state3404to output a RST1 signal to reset the address counter2604of monitor memory4104and counter4106to starting addresses. From state3404the state machine transitions to state3406where it polls for a Start signal. When the Start signal occurs, the state machine transitions to state3408where it outputs a CLK signal to ADC3206. In response to the CLK signal, ADC3206samples the analog output of the currently addressed TS, digitizes the sampled signal and outputs a parallel digital representation of the analog signal to the parallel inputs of memory4104. From state3408the state machine transitions to state3410where it outputs a WR signal to monitor memory4104. In response to the WR signal, monitor memory4104stores the parallel outputs of ADC3206and the current TSA output from the counter4106. From state3410the state machine transitions to state3412where it outputs an INC1 signal to address counter2604of the monitor memory4104to select the next memory location to be written to and to counter4106to increment the TSA counter4106to the next count pattern to select the next TS to be measured. If the Start signal is still asserted, the state machine transitions back to state3408to repeat the CLK, WR and INC1 state operations. When the TSA counter4106reaches a maximum count it wraps around to the starting count and continues counting. If the Start signal is de-asserted, the state machine transitions to state3406to wait for either another Start signal or the MENA signal to be de-asserted. At the end of a monitoring operation, register2508is accessed by the interposer TAP, via bus1614, to read out the contents of the monitor memory locations. Each location read will contain data from a TS measurement and the address (the TSA output of counter4106) of the TS that was measured.
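The scanning loop ofFIG.41can be summarized behaviorally as follows. This Python sketch is illustrative only; the sensors list, adc object and memory dict are stand-ins for SW3204, ADC3206and monitor memory4104, not structures defined by the disclosure.

```python
# Sketch of the temperature-sensor scan of FIG. 41: the TSA counter
# addresses each sensor in turn, and every stored word pairs the
# digitized reading with the TSA address that produced it.

def run_temperature_monitor(start, sensors, adc, memory):
    address, tsa = 0, 0                 # RST1 resets both counters
    while start():                      # runs while Start is asserted
        adc.sample(sensors[tsa])        # TSA routes one TS through SW 3204
        memory[address] = (tsa, adc.read_output())  # WR stores data + TSA
        address += 1                    # INC1 advances the memory address
        tsa = (tsa + 1) % len(sensors)  # TSA wraps at its maximum count
```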
FIG.42illustrates a stacked die4202mounted on an interposer4204which is mounted on a substrate4206. The interposer4204contains a temperature monitor4102with inputs coupled to temperature sensors (TS). As seen, the TS's can exist in the interposer, the substrate, and/or in die of the die stack. When enabled and a start condition occurs, the temperature monitor cycles through the steps of addressing each TS and sampling, digitizing and storing its output. This operation continues until the start condition goes away. At the end of a temperature monitoring operation, the stored TS temperature measurements and the TS address of each are read out of temperature monitor4102by the interposer TAP204for examination. FIG.43illustrates an example of a TAP controlled temperature monitor4302that includes a SW3204, an ADC3206, optional amplifier (A)3210and a TAP controlled register4304. Temperature monitor4302differs from the temperature monitor4102in that the interposer TAP controls the operation of monitor4302instead of the trigger unit1606. SW3204has TS inputs4110, select temperature sensor (SELTS) inputs for selecting a TS for measurement and an output coupled to an input of the ADC. Register4304has SELTS outputs coupled to the SELTS inputs of SW3204, a CLK output coupled to the ADC, an optional Done input from the ADC and inputs for inputting the data output (DO) from the ADC. Register4304is coupled to the TDI, CTL and TDO signals of bus1614to allow the TAP to access register4304to control the operation of temperature monitor4302. To obtain a temperature measurement from one of the TS1-N, the TAP performs one or more scan operations to register4304to shift in and update data on the SELTS outputs to select a TS1-N for measurement and to enable a CLK to be output from register4304to start the measurement. The CLK output from register4304needs to occur after the SELTS signals have been set to select a TS for measurement. This can be achieved in different ways, including, but not limited to, the following two ways. A first way is to perform a first scan operation of register4304to update the SELTS outputs to select a TS for measurement, followed by a second scan operation of register4304to assert the CLK output to start the measurement process. A second way is to do a single scan operation to register4304that updates the SELTS outputs to select a TS for measurement and also asserts the CLK output to start the measurement process. In the second way, register4304must be adapted with circuitry that delays the assertion of the CLK output until after the SELTS outputs have been set to select a TS1-N for measurement. In this example, the ADC3206is assumed to be self-timed (i.e. it has an internal clock/oscillator) after receiving the CLK input from the register. The ADC may or may not include a Done output signal. If it includes a Done output signal, the TAP will repeatedly scan the register to capture and shift out the value of the Done signal and the DO from the ADC. When the Done signal is asserted, the DO values scanned out will be the TS measurement data. If the ADC does not include a Done signal, i.e. the self-timed ADC operation is fast enough to occur well before the next TAP scan operation to register4304, the DO value captured and shifted out on the next scan operation will be the TS measurement data.
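For illustration, the first way (two scan operations, followed by repeated capture scans of the Done and DO fields) can be sketched as follows; the tap object and its scan method are hypothetical stand-ins for TAP accesses to register4304, not an API defined by the disclosure.

```python
# Sketch of the two-scan measurement procedure for the TAP controlled
# temperature monitor of FIG. 43. tap.scan_register_4304 is an
# illustrative placeholder that updates the SELTS/CLK fields and
# returns the captured (Done, DO) values.

def measure_ts(tap, ts_index):
    # First scan: update SELTS to select sensor ts_index; CLK held low.
    tap.scan_register_4304(selts=ts_index, clk=False)
    # Second scan: assert CLK to start the self-timed conversion.
    tap.scan_register_4304(selts=ts_index, clk=True)
    # Repeated scans capture Done and DO until the conversion finishes.
    while True:
        done, data = tap.scan_register_4304(selts=ts_index, clk=False)
        if done:
            return data  # DO now holds the digitized TS measurement
```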
FIG.44illustrates a stacked die4402mounted on an interposer4404which is mounted on a substrate4406. The interposer4404contains a temperature monitor4302with inputs coupled to temperature sensors (TS). As seen, the TS's can exist in the interposer, the substrate, and/or in die of the die stack. When controlled by the interposer TAP, the temperature monitor4302can address one of the TS inputs and sample, digitize and shift out the temperature measurement from the TS. The advantage of the temperature monitor4302over temperature monitor4102is simplicity. The disadvantage is that the temperature monitoring cannot be synchronized to occur in response to a specific functional operation of stacked die4402, as can the temperature monitor4102ofFIG.41. While the monitor trigger unit1606and monitors1608-1612of the disclosure have been described as being used within interposers, it should be understood that the monitor trigger unit1606and monitors1608-1612could be used within a die or within an embedded core located within a die. FIG.45illustrates a single ended TAP controlled analog signal monitor4502that can be used to sample, digitize and output analog signals. Monitor4502is the same as monitor4302with the exception that SW3204is coupled to analog signal inputs (IN1-N)3214instead of to temperature sensor outputs. Monitor4502can be used in substitution of the trigger unit controlled monitor3202ofFIG.39to measure single ended voltages on interposer VB, GB and AS signals. FIG.46illustrates a differential TAP controlled analog signal monitor4602that can be used to sample, digitize and output differential analog signals. Monitor4602is the same as monitor4502with the exception that it includes two switches (SW)3204each having inputs (IN1-N)3214for inputting analog signals, two ADCs3206and a register having parallel inputs for the data outputs (DO) of both ADCs. Monitor4602can be used in substitution of the trigger unit controlled monitor3802ofFIG.40to measure differential voltages on interposer VB, GB and AS signals. While the monitor trigger unit1606, trigger controlled monitors1608-1612and TAP controlled monitors4302,4502and4602have been described as being used within interposers, it should be understood that they are not limited to only being used within interposers. As described inFIG.47below, they can also be used within die or embedded cores within die. FIG.47illustrates a die or embedded core4702which includes the monitor trigger unit1606, address & data bus monitors1612, voltage & analog signal monitors1610,4502and4602and temperature monitors1608and4302.
The monitor trigger unit and monitors operate in the die or embedded core4702as they have been described operating in interposers. The monitor trigger unit and monitors are coupled to a TAP204within the die or embedded core4702via bus1614. The TAP is interfaced to external TDI, TCK, TMS and TDO signals on the die or embedded core4702. The monitor trigger unit is coupled to an address bus, a data bus and control signals located within the die or embedded core4702. Also, the monitor trigger unit may be interfaced to an external XTRG signal4706of the die or embedded core4702. Monitor1612is coupled to an address bus and a data bus located within the die or embedded core4702. Monitors1610,4502and/or4602are coupled to a V Bus, a G Bus and analog signals located within the die or embedded core4702. Monitors1608and/or4302are coupled to temperature sensors (TS) located within the die or embedded core4702. Trigger unit controlled monitors operate in response to the monitor control bus1616as has been described. TAP controlled monitors operate in response to TAP control as has been described. FIG.48illustrates an instrumentation interposer of the disclosure used with a stack of die4804-4808that are connected to the interposer via bond wires. The instrumentation interposer operates as previously described to access and control monitoring instruments within the interposer. FIG.49illustrates a group of one or more stacked or single die4904-4908located on an instrumentation interposer4902of the disclosure. The instrumentation interposer operates as previously described to access and control monitoring instruments within the interposer. Although the disclosure has been described in detail, it should be understood that various changes, substitutions and alterations may be made without departing from the spirit and scope of the disclosure as defined by the appended claims.
49,935
11860225
DETAILED DESCRIPTION In this specification, the invention will be described in a plurality of sections or embodiments when required as a matter of convenience. However, these sections or embodiments are not unrelated to each other unless otherwise stated, and one relates to a part or all of the other as a detailed description, a modification, or a supplement. Also, in the embodiment described below, when mentioning the number of elements (including the number of pieces, values, amounts, ranges, and the like), the number of the elements is not limited to a specific number unless otherwise stated or except the case where the number is apparently limited to a specific number in principle, and a number larger or smaller than the specific number is also applicable. Furthermore, in the embodiment described below, it goes without saying that each component (including an element step) is not indispensable unless otherwise clearly specified or unless it is obvious that the component is indispensable in principle. Likewise, in the embodiment described below, when mentioning a shape, a positional relation, or the like of a component, a substantially approximate shape, a similar shape, or the like is included unless otherwise clearly specified or unless it is obvious from the context that the shape, the positional relation, or the like of the component differs in principle. The same applies to the above-described numerical value and range. Hereinafter, an embodiment will be described in detail with reference to the drawings. Note that the same members are denoted by the same reference characters in principle throughout the drawings for describing the embodiment, and the repetitive description thereof will be omitted. Also, in the following embodiment, descriptions of the same or similar parts are not repeated in principle unless particularly required. In addition, in the drawings used for the embodiment, hatching may be omitted even in cross-sectional views so as to make them easy to see. Also, hatching may be applied even in plan views so as to make them easy to see. Embodiment <Semiconductor Device> First, an example of a semiconductor device according to the present embodiment will be described with reference toFIG.1.FIG.1is a cross-sectional view (side cross-sectional view) of the semiconductor device according to the present embodiment. A semiconductor device PKG of the present embodiment is a semiconductor device in a package form, that is, a semiconductor package. Specifically, as shown inFIG.1, the semiconductor device PKG includes a semiconductor chip CP, a die pad (chip mounting portion) DP on which the semiconductor chip CP is mounted, a plurality of leads (lead portions) LD formed of conductors, and a sealing portion (sealing resin portion) MR for sealing them. The sealing portion MR is made of a resin material such as a thermosetting resin material, and may contain a filler or the like. For example, the sealing portion MR can be formed by using an epoxy resin containing a filler or the like. The plurality of leads LD is composed of conductors, and is preferably made of a metal material such as copper (Cu) or a copper alloy. A part of each of the plurality of leads LD is sealed inside the sealing portion MR, and the other part thereof protrudes from side surfaces of the sealing portion MR to the outside of the sealing portion MR.
Hereinafter, the portion of the lead LD located inside the sealing portion MR is referred to as an inner lead portion, and the portion of the lead LD located outside the sealing portion MR is referred to as an outer lead portion. Note that the semiconductor device PKG of the present embodiment has a structure in which a part (outer lead portion) of each lead LD protrudes from the side surfaces of the sealing portion MR, and the following description will be given based on this structure. However, the present invention is not limited to this structure, and the structure in which each lead LD hardly protrudes from the side surfaces of the sealing portion MR and a part of each lead LD is exposed on a lower surface of the sealing portion MR (QFN structure) or the like can also be adopted. Each outer lead portion of the plurality of leads LD protrudes from the side surfaces of the sealing portion MR to the outside of the sealing portion MR. The outer lead portion of each lead LD is bent such that a lower surface of the outer lead portion near the end portion is located on substantially the same plane as the lower surface of the sealing portion MR. The outer lead portion of the lead LD functions as an external connection terminal (external terminal) of the semiconductor device PKG. A plating layer PL is formed on the surface of the outer lead portion of the lead LD. The plating layer PL is made of solder (solder material), for example, Sn-based solder, Sn—Bi-based solder, or Sn—Ag—Cu-based solder. Therefore, the surface of the outer lead portion of the lead LD is covered with a solder material (here, the plating layer PL). The plating layer PL can also be regarded as a part of the outer lead portion of the lead LD. The combination of the outer lead portion of each lead LD and the plating layer PL formed on the surface thereof can be regarded as an external terminal of the semiconductor device PKG. In that case, the surface of the external terminal of the semiconductor device PKG is composed of a solder material (here, a solder material constituting the plating layer PL). The die pad DP is composed of a conductor, and is preferably made of a metal material such as copper (Cu) or a copper alloy. On an upper surface of the die pad DP, the semiconductor chip CP is mounted with its front surface facing upward and its back surface facing the die pad DP. The back surface of the semiconductor chip CP is bonded and fixed to the die pad DP via an adhesive layer (bonding material) DB. The semiconductor chip CP is sealed inside the sealing portion MR and is not exposed from the sealing portion MR. The semiconductor chip CP is manufactured by forming various semiconductor elements or semiconductor integrated circuits on a main surface of a semiconductor substrate made of, for example, single crystal silicon, and then separating the semiconductor substrate into each semiconductor chip by dicing or the like. The semiconductor chip CP is electrically connected to the plurality of leads LD via a plurality of bonding wires BW. Specifically, a plurality of pad electrodes PD is formed on the surface of the semiconductor chip CP, and the plurality of pad electrodes PD is electrically connected to the plurality of leads LD via the plurality of bonding wires BW. Namely, of both ends of each bonding wire BW, one end is connected to the pad electrode PD of the semiconductor chip CP, and the other end is connected to the inner lead portion of the lead LD.
The bonding wire BW has conductivity, and is preferably made of a fine metal wire such as a gold (Au) wire, a copper (Cu) wire, or an aluminum (Al) wire. The bonding wire BW is sealed inside the sealing portion MR and is not exposed from the sealing portion MR. <Manufacturing Process of Semiconductor Device> A manufacturing process of a semiconductor device according to the present embodiment includes a step of preparing a test apparatus TS described later, a step of preparing the semiconductor device (object to be tested) PKG shown inFIG.1described above, and a step of performing an electrical test (test step) for the semiconductor device PKG after these steps. The step of preparing the semiconductor device PKG includes a step of mounting the semiconductor chip CP on the die pad DP of the lead frame (die bonding step) and a step of electrically connecting the plurality of pad electrodes PD of the semiconductor chip CP and (the inner lead portions of) the plurality of leads LD of the lead frame via the plurality of bonding wires BW (wire bonding step). The step of preparing the semiconductor device PKG further includes a step of sealing the semiconductor chip CP, the die pad DP, the plurality of bonding wires BW, and (the inner lead portions of) the plurality of leads LD with the sealing portion MR (molding step), a step of forming the plating layer PL on the surface of the plurality of leads LD exposed from the sealing portion MR, a step of cutting the plurality of leads LD from the lead frame, and a step of bending the plurality of leads LD. In the step of performing the electrical test for the semiconductor device PKG (test step), the electrical test is performed for the semiconductor device PKG by using the test apparatus TS described below. This step is referred to as an electrical test step or a test step. <Test Apparatus> FIG.2is an explanatory diagram schematically showing the configuration of the test apparatus (electrical test apparatus, inspection apparatus, tester) TS for performing the electrical test for the semiconductor device PKG.FIG.3toFIG.6are cross-sectional views showing the principal part of the test apparatus TS shown inFIG.2around a socket SK in an enlarged manner.FIG.3andFIG.4correspond to the state in which the semiconductor device PKG to be tested is arranged on a seat DZ of the socket SK of the test apparatus TS, andFIG.5andFIG.6correspond to the state in which the semiconductor device PKG is arranged as shown inFIG.3andFIG.4and then the semiconductor device PKG is pushed toward a probe unit UT of the test apparatus TS. Also, althoughFIG.3andFIG.4show cross-sections at different positions,FIG.3andFIG.5show cross-sections at the same position, andFIG.4andFIG.6show cross-sections at the same position. Namely,FIG.3andFIG.5show the cross-sections at the position which does not cross the probe pin PB, andFIG.4andFIG.6show the cross-sections at the position which crosses the probe pin PB. Further,FIG.7is a cross-sectional view showing the principal part around the socket SK of the test apparatus TS in an enlarged manner, and it shows the state in which the lead LD of the semiconductor device PKG and the terminal TE of the test board TB are electrically connected via the probe pin PB. Note thatFIG.7corresponds to the state in which the semiconductor device PKG is pushed toward the probe unit UT of the test apparatus TS as inFIG.5andFIG.6.FIG.8is a side view of the probe pin PB andFIG.9is a cross-sectional view of the probe pin PB. 
InFIG.9, the illustration of a spring SP is omitted for the sake of simplicity. As shown inFIG.2, the test apparatus TS of the present embodiment includes the socket (housing unit) SK which houses the semiconductor device PKG to be tested, the test board (wiring board, performance board) TB electrically connected to the semiconductor device PKG via the socket SK, and a test head HE electrically connected to the test board TB. A test circuit configured to input and output a signal or a test voltage from/to the semiconductor device PKG is formed in the test head HE, and the test circuit is electrically connected to the semiconductor device PKG via the test board TB and the socket SK. Also, a control unit (tester main body) CL is arranged next to the test head HE, and the control unit CL is electrically connected to the test head HE. A control circuit for controlling the electrical test is formed in the control unit CL, and the control unit CL performs, for example, the control of the relative position between the test head HE and the semiconductor device PKG or the control of the continuous testing of the plurality of semiconductor devices PKG. As another aspect, a control circuit may be formed inside the test head HE. The test board TB is a wiring board having an upper surface TBa on which the socket SK is mounted and a lower surface (back surface) TBb located on opposite side of the upper surface TBa. The test board TB is arranged and fixed on the upper surface of the test head HE such that the lower surface TBb of the test board TB faces the upper surface of the test head HE. The method of fixing the test board TB is not particularly limited. As shown inFIG.3toFIG.7, a conductor pattern (conductor layer) including a plurality of terminals (electrodes) TE is formed on the upper surface TBa of the test board TB. The plurality of terminals TE formed on the upper surface TBa of the test board TB is electrically connected to the test circuit formed in the test head HE via a wiring formed on the upper surface TBa of the test board TB (not shown), a via wiring penetrating the test board TB (not shown), and a wiring formed on the lower surface TBb of the test board TB (not shown). The terminal TE is a terminal for contacting the probe pin PB. The terminal TE preferably contains a gold (Au) film. When the terminal TE has a single-layer structure, the terminal TE is preferably made of gold (Au). Also, when the terminal TE has a laminated structure, the uppermost layer thereof is preferably made of gold (Au). The socket SK is mounted on the upper surface TBa of the test board TB. The socket SK has an outer frame portion (socket main body) FR and the probe unit UT. The probe unit UT is housed (arranged) in an opening of the outer frame portion FR. Therefore, the probe unit UT is surrounded by the outer frame portion FR in a plan view. Here, the plan view corresponds to the case of seeing the plane substantially parallel to the upper surface TBa of the test board TB. The outer frame portion FR of the socket SK is arranged on the upper surface TBa of the test board TB and is fixed to the test board TB. The method of fixing the outer frame portion FR is not particularly limited, but the outer frame portion FR can be fixed to the test board TB by, for example, a screw (not shown). The outer frame portion FR is mainly made of an insulating material such as a resin.
The probe unit UT has a base portion BS and a plurality of probe pins (test terminals, pogo pins, contact terminals, contactors) PB housed in the base portion BS. The plurality of probe pins PB is inserted (housed) in a plurality of through holes (probe holes) TH provided in the base portion BS, respectively. The probe pin PB is provided in order to electrically connect the lead LD of the semiconductor device PKG and the terminal TE of the test board TB. The probe unit UT preferably has as many probe pins PB as the external terminals (leads LD in this case) of the semiconductor device PKG to be tested. The number of external terminals of the semiconductor device PKG is, for example, 64 to 144, but is not limited to this. The base portion BS has, for example, a plate shape, and is preferably made of an insulating material such as a resin. The base portion BS has a main surface BSa and a main surface BSb that are located on opposite sides of each other, and the main surface BSb of the base portion BS faces the upper surface TBa of the test board TB in the case ofFIG.3toFIG.7. In this case, the main surface BSb of the base portion BS corresponds to the lower surface of the base portion BS, and the main surface BSa of the base portion BS corresponds to the upper surface of the base portion BS. In the case ofFIG.3toFIG.6, the outer peripheral portion of the main surface BSa of the base portion BS is pressed by the outer frame portion FR, so that the probe unit UT is held by the test board TB. The base portion BS may also be composed of a plurality of members. For example, in the case ofFIG.3toFIG.7, the base portion BS has two plate-shaped members BS1and BS2that are laminated on each other. Specifically, the base portion BS has a structure in which the plate-shaped member BS1and the plate-shaped member BS2are laminated on each other. In the case ofFIG.3toFIG.7, of the plate-shaped members BS1and BS2constituting the base portion BS, the member BS1is arranged on the upper side and the member BS2is arranged on the lower side (the side closer to the test board TB). The through hole TH in which the probe pin PB is housed penetrates the laminated members BS1and BS2. Also, the thickness of the member BS1and the thickness of the member BS2may be different from each other. However, when the thickness of the member BS1and the thickness of the member BS2are made equal to each other, the cost required for preparing the probe unit UT can be suppressed because the same member can be commonly used for the member BS1and the member BS2. The seat DZ is arranged on the probe unit UT, that is, on the base portion BS via an elastic body BN such as a spring. In the case ofFIG.3toFIG.7, the seat DZ is arranged on the main surface BSa of the base portion BS via the elastic body BN. The elastic body BN such as a spring is housed in a concave portion (recessed portion) KB1provided in the main surface BSa of the base portion BS, and a part of the elastic body BN protrudes from the main surface BSa of the base portion BS and is in contact with the lower surface of the seat DZ. The elastic body BN exerts a force on the seat DZ that lifts the seat DZ. The semiconductor device PKG is housed in the opening of the outer frame portion FR, and is mounted (arranged) on the seat DZ on the probe unit UT. Therefore, in a plan view, the semiconductor device PKG is surrounded by the outer frame portion FR and overlaps with the probe unit UT. The probe unit UT is present below the semiconductor device PKG.
Further, a pressing jig (not shown) can be arranged above the socket SK (thus, above the semiconductor device PKG). By pressing the upper surface of the sealing portion MR of the semiconductor device PKG, the pressing jig can push the semiconductor device PKG toward the probe unit UT. Alternatively, by applying a pressing force to (the outer lead portions of) the plurality of leads LD of the semiconductor device PKG, the pressing jig can push the semiconductor device PKG toward the probe unit UT. FIG.3andFIG.4show the stage after the semiconductor device PKG to be tested is arranged on the seat DZ and before the semiconductor device PKG is pushed toward the probe unit UT. Meanwhile,FIG.5toFIG.7show the stage in which the semiconductor device PKG has been pushed toward the probe unit UT by the pressing jig after arranging the semiconductor device PKG on the seat DZ as shown inFIG.3andFIG.4. At the stage ofFIG.3andFIG.4, since the elastic body BN is lifting the seat DZ, the seat DZ is separated from the base portion BS, and each lead LD of the semiconductor device PKG is separated from the probe pin PB. Therefore, at the stage of FIG.3andFIG.4, each lead LD of the semiconductor device PKG is not in contact with the probe pin PB, and the lead LD and the probe pin PB located below the lead LD are not electrically connected to each other. On the other hand, at the stage ofFIG.5toFIG.7, the semiconductor device PKG is pushed toward the probe unit UT with the pressing jig, whereby the semiconductor device PKG descends together with the seat DZ and approaches the base portion BS, so that (the outer lead portion of) each lead LD of the semiconductor device PKG comes into contact with the probe pin PB located below the lead LD. Therefore, at the stage ofFIG.5toFIG.7, (the outer lead portion of) each lead LD of the semiconductor device PKG is in contact with the probe pin PB located below the lead LD, so that the lead LD and the probe pin PB located below the lead LD are electrically connected to each other. Also, at the stage ofFIG.5toFIG.7, each probe pin PB of the probe unit UT comes into contact with and is electrically connected to the terminal TE of the test board TB located below the probe pin PB. Note that it is not essential that the probe pin PB and the terminal TE are in contact (electrically connected) at the stage ofFIG.3andFIG.4, but the probe pin PB and the terminal TE are in contact with and electrically connected to each other in the case ofFIG.5. The configuration of the probe pin PB will be further described with reference toFIG.7toFIG.9. As shown inFIG.7toFIG.9, the probe pin PB includes a plunger (plunger portion) PR1, a plunger (plunger portion) PR2arranged on opposite side of the plunger PR1, and a spring (spring portion) SP as an elastic body to be arranged between the plunger PR1and the plunger PR2, and has an elongated rod-like (needle-like) shape as a whole. In the probe pin PB, the plunger PR1and the plunger PR2are arranged so as to face each other via an elastic body portion (here, the spring SP). In the case ofFIG.7andFIG.8, the spring SP is a coil spring. The plunger PR1has a tip portion ST1opposite the side facing the plunger PR2, and the plunger PR2has a tip portion ST2opposite the side facing the plunger PR1. The tip portion ST1corresponds to one tip portion of the probe pin PB, the tip portion ST2corresponds to the other tip portion of the probe pin PB, and the tip portion ST2is the tip portion located on opposite side of the tip portion ST1in the probe pin PB. 
The plungers PR1and PR2have conductivity and are each made of a metal material. It is preferable that the plungers PR1and PR2(in particular, the tip portions ST1and ST2) are made of the same material (same metal material). Each probe pin PB (plungers PR1and PR2and spring SP) is housed (inserted) in the through hole TH of the base portion BS of the probe unit UT. Note that the tip portion ST1of the plunger PR1constituting the probe pin PB (that is, the tip portion ST1of the probe pin PB) protrudes from the main surface BSa of the base portion BS of the probe unit UT. Further, the tip portion ST2of the plunger PR2constituting the probe pin PB (that is, the tip portion ST2of the probe pin PB) protrudes from the main surface BSb of the base portion BS of the probe unit UT. It is preferable that the amount of protrusion of the probe pin PB from the main surface BSa of the base portion BS and the amount of protrusion of the probe pin PB from the main surface BSb of the base portion BS are the same. In the present embodiment, the tip portion ST1and the tip portion ST2have the same shape in each probe pin PB. From another point of view, the tip portion ST1and the tip portion ST2have a symmetrical structure in each probe pin PB. In the case ofFIG.7toFIG.9, both the tip portion ST1and the tip portion ST2have a crown-like shape. The plunger PR1has a flange portion FG1and the plunger PR2has a flange portion FG2. In the plunger PR1, the flange portion FG1is provided on the inner side (side closer to the plunger PR2) than the tip portion ST1, and in the plunger PR2, the flange portion FG2is provided on the inner side (side closer to the plunger PR1) than the tip portion ST2. In the plunger PR1, the flange portion FG1annularly overhangs outward (that is, overhangs in a direction in which the diameter increases). Further, in the plunger PR2, the flange portion FG2annularly overhangs outward (that is, overhangs in a direction in which the diameter increases). Namely, the plunger PR1has a substantially cylindrical outer shape, but the diameter of the flange portion FG1is larger than the diameter of the part of the plunger PR1on the tip side from the flange portion FG1. Also, the plunger PR2has a substantially cylindrical outer shape, but the diameter of the flange portion FG2is larger than the diameter of the part of the plunger PR2on the tip side from the flange portion FG2. The spring SP is arranged between the flange portion FG1of the plunger PR1and the flange portion FG2of the plunger PR2. A hole PH is provided at the root of the plunger PR2(the end portion opposite the tip portion ST2), and a thin rod-shaped shaft portion AX provided at the root of the plunger PR1(the end portion opposite the tip portion ST1) is inserted in the hole PH. Namely, the root portion of the plunger PR2encloses a part of the plunger PR1(shaft portion AX). Consequently, the plunger PR1and the plunger PR2are in contact with each other, so that the plunger PR1and the plunger PR2are electrically connected to each other. Also, when the spring SP has conductivity, the plunger PR1and the plunger PR2can be electrically connected to each other by contacting the spring SP with both the plunger PR1and the plunger PR2. Therefore, in each probe pin PB, the tip portion ST1and the tip portion ST2are electrically connected to each other through a conductor. <Test Step> The test step in which the electrical test is performed for the semiconductor device PKG by using the test apparatus TS will be described. 
First, the test apparatus TS is prepared. The preparation of the test apparatus TS may be performed before or after the preparation of the semiconductor device PKG to be tested, or may be performed at the same time as the preparation of the semiconductor device PKG to be tested. In the test apparatus TS prepared here, as shown inFIG.3andFIG.4, the probe unit UT is arranged on the test board TB such that the main surface BSb of the base portion BS of the probe unit UT faces the upper surface TBa of the test board TB. In this case, of the plungers PR1and PR2constituting each probe pin PB, the plunger PR1is located on the upper side, and the plunger PR2is located on the lower side (side closer to the test board TB). Also, the tip portion ST1faces upward and the tip portion ST2faces downward (side closer to the test board TB) in each probe pin PB. The tip portion ST1of the probe pin PB protrudes from the main surface BSa of the base portion BS of the probe unit UT, and the tip portion ST2of the probe pin PB protrudes from the main surface BSb of the base portion BS of the probe unit UT. The tip portions ST2of the plurality of probe pins PB included in the probe unit UT face the plurality of terminals TE of the test board TB, respectively. At this stage, it is not essential that the tip portion ST2of the probe pin PB is in contact with (electrically connected to) the terminal TE of the test board TB, but the tip portion ST2of the probe pin PB is in contact with and electrically connected to the terminal TE of the test board TB in the case ofFIG.4. In the test step, first, as shown inFIG.3andFIG.4, the semiconductor device PKG to be tested is arranged on the seat DZ of the socket SK of the test apparatus TS. When the semiconductor device PKG is arranged on the seat DZ, the tip portion ST1of each probe pin PB is in a state of facing the lead LD of the semiconductor device PKG. However, since the elastic body BN is lifting the seat DZ at this stage, the seat DZ is separated from the base portion BS, and each lead LD of the semiconductor device PKG is separated from the tip portion ST1of the probe pin PB. Therefore, at the stage ofFIG.3andFIG.4, each lead LD of the semiconductor device PKG is not in contact with the probe pin PB, and each lead LD of the semiconductor device PKG is not electrically connected to the probe pin PB located below the lead LD. Note that, at the stage where the semiconductor device PKG to be tested is arranged on the seat DZ as shown inFIG.3andFIG.4, it is not essential that the tip portion ST2of the probe pin PB comes into contact with (is electrically connected to) the terminal TE of the test board TB, but the tip portion ST2of the probe pin PB is in contact with and electrically connected to the terminal TE of the test board TB in the case ofFIG.4. Here, when the semiconductor device PKG is arranged on the seat DZ of the socket SK, the semiconductor device PKG which is a semiconductor package is in a state of being housed in the socket SK. Therefore, the position on the seat DZ in the outer frame portion FR of the socket SK can be regarded as the package housing portion of the socket SK (housing portion of the semiconductor device PKG). Therefore, arranging the semiconductor device PKG on the seat DZ of the socket SK can be regarded as arranging the semiconductor device PKG in the package housing portion of the socket SK. 
Then, as shown in FIG. 5 to FIG. 7, by pushing the semiconductor device PKG toward the probe unit UT with the pressing jig (not shown) or the like, the semiconductor device PKG descends together with the seat DZ and approaches the base portion BS, so that the outer lead portion of each lead LD of the semiconductor device PKG comes into contact with and is electrically connected to the tip portion ST1 of the probe pin PB located below the lead LD. Further, when the semiconductor device PKG is pushed toward the probe unit UT by the pressing jig (not shown) or the like as shown in FIG. 5 to FIG. 7, the tip portion ST2 of each probe pin PB comes into contact with and is electrically connected to the terminal TE of the test board TB located below the probe pin PB. Therefore, when the semiconductor device PKG is pushed toward the probe unit UT, as shown in FIG. 6 and FIG. 7, the tip portion ST1 of each probe pin PB is brought into contact with the lead LD (more specifically, the plating layer PL on the surface of the lead LD) of the semiconductor device PKG, and the tip portion ST2 of each probe pin PB is brought into contact with the terminal TE of the test board TB. As a result, the lead LD of the semiconductor device PKG and the terminal TE of the test board TB are electrically connected via the probe pin PB, and the plurality of leads LD of the semiconductor device PKG is electrically connected to the test circuit formed in the test head HE via the plurality of probe pins PB and the conductor portion (including the terminal TE) of the test board TB. The tip portion ST1 of the probe pin PB has a sharp portion (in the case of FIG. 7, it has a plurality of sharp portions), and this sharp portion bites into the lead LD (more specifically, the plating layer PL on the surface of the lead LD), so that the contact resistance between the lead LD and the probe pin PB can be reduced. In this state (state in FIG. 5 to FIG. 7), a current or voltage is supplied from the test circuit formed in the test head HE to the semiconductor chip CP of the semiconductor device PKG via the test board TB, the probe pin PB, and the lead LD, whereby the electrical test of the semiconductor device PKG can be performed. For example, by measuring the electrical characteristics of the semiconductor device PKG, the quality of the electrical characteristics of the semiconductor device PKG is tested. The probe pin PB is used as a transmission path for transmitting the current or voltage input from the terminal TE of the test board TB to the lead LD of the semiconductor device PKG. Thereafter, the pressing force applied to the semiconductor device PKG by the pressing jig or the like is released, and the semiconductor device PKG for which the electrical test has been completed is taken out from (the package housing portion of) the socket SK. Then, after the semiconductor device PKG to be tested next is arranged on the seat DZ of the socket SK as shown in FIG. 3 and FIG. 4, the semiconductor device PKG is pushed toward the probe unit UT as shown in FIG. 5 to FIG. 7, and the electrical test of the semiconductor device PKG is performed. By repeating this, the electrical test can be sequentially performed for a plurality of semiconductor devices PKG. When the number of semiconductor devices PKG for which the electrical test has been performed using the test apparatus TS increases, the tip portion ST1 of the probe pin PB that has been repeatedly brought into contact with the lead LD is worn away.
If the wear amount of the tip portion ST1 of the probe pin PB increases, there is a fear that the contact resistance between the tip portion ST1 of the probe pin PB and the lead LD may increase, and the increase in the contact resistance may reduce the reliability of the electrical test of the semiconductor device. In the present embodiment, when the tip portion ST1 of the probe pin PB is worn away due to repeated contact with the lead LD, the probe unit UT including a plurality of probe pins PB can be turned upside down. Hereinafter, the case where the probe unit UT including the plurality of probe pins PB is turned upside down will be described with reference to FIG. 10 to FIG. 16. FIG. 10 to FIG. 16 are cross-sectional views showing the principal part around the socket SK of the test apparatus TS in an enlarged manner. FIG. 10, FIG. 12, and FIG. 14 show cross-sections at the position corresponding to FIG. 3 and FIG. 5, and FIG. 11, FIG. 13, and FIG. 15 show cross-sections at the position corresponding to FIG. 4 and FIG. 6. FIG. 16 shows a cross-section at the position corresponding to FIG. 7. After performing the electrical test of the semiconductor device PKG by using the test apparatus TS as shown in FIG. 3 to FIG. 7 above, the probe unit UT is removed from the test apparatus TS, and then the probe unit UT is rearranged so that the main surface BSa of the base portion BS faces the upper surface TBa of the test board TB as shown in FIG. 10 and FIG. 11. At this time, since the probe unit UT is rearranged by turning the probe unit UT upside down, the probe unit UT is housed in the opening of the outer frame portion FR and is arranged on the upper surface TBa of the test board TB, with the main surface BSa of the base portion BS facing the upper surface TBa of the test board TB as shown in FIG. 10 and FIG. 11. Namely, the main surface BSa of the base portion BS becomes the lower surface of the base portion BS, and the main surface BSb of the base portion BS becomes the upper surface of the base portion BS. Further, of the plate-shaped members BS1 and BS2 constituting the base portion BS, the member BS2 is arranged on the upper side and the member BS1 is arranged on the lower side (the side closer to the test board TB). The probe unit UT is held or fixed to the test board TB by pressing the outer peripheral portion of the main surface BSb of the base portion BS with the outer frame portion FR. Note that it is more preferable to perform cleaning treatment of the plurality of probe pins PB included in the probe unit UT after removing the probe unit UT and before rearranging the probe unit UT. The seat DZ is arranged on the rearranged probe unit UT, that is, on the main surface BSb of the base portion BS via the elastic body BN such as a spring. The elastic body BN such as a spring is housed in a concave portion (recessed portion) KB2 provided in the main surface BSb of the base portion BS, and a part of the elastic body BN protrudes from the main surface BSb of the base portion BS and comes into contact with the lower surface of the seat DZ. The elastic body BN exerts a force for lifting the seat DZ on the seat DZ.
The position of the concave portion KB1 in the main surface BSa of the base portion BS when arranging the probe unit UT such that the main surface BSb faces the upper surface TBa of the test board TB as shown in FIG. 3 to FIG. 7 and the position of the concave portion KB2 in the main surface BSb of the base portion BS when arranging the probe unit UT such that the main surface BSa faces the upper surface TBa of the test board TB as shown in FIG. 10 to FIG. 16 are preferably the same. Namely, it is preferable that the position of the concave portion KB1 in the main surface BSa of the base portion BS and the position of the concave portion KB2 in the main surface BSb of the base portion BS become the same position when the base portion BS is turned upside down. Consequently, the relative positions of the elastic body BN with respect to the base portion BS can be made the same between the case of FIG. 3 to FIG. 7 and the case of FIG. 10 to FIG. 16. Further, it is preferable that the shape and depth of the concave portion KB1 and the shape and depth of the concave portion KB2 are the same as each other. Consequently, the shape and dimensions of the elastic body BN to be used can be made common between the case of FIG. 3 to FIG. 7 and the case of FIG. 10 to FIG. 16, and it is also possible to use the common elastic body BN between the case of FIG. 3 to FIG. 7 and the case of FIG. 10 to FIG. 16. When the probe unit UT is turned upside down, the plurality of probe pins PB constituting the probe unit UT is also turned upside down together with the base portion BS constituting the probe unit UT. Therefore, as shown in FIG. 10 and FIG. 11, of the plungers PR1 and PR2 constituting each probe pin PB, the plunger PR2 is located on the upper side and the plunger PR1 is located on the lower side (side closer to the test board TB), and the tip portion ST2 faces upward and the tip portion ST1 faces downward (side closer to the test board TB) in each probe pin PB. The tip portion ST1 of the probe pin PB protrudes from the main surface BSa of the base portion BS of the probe unit UT, and the tip portion ST2 of the probe pin PB protrudes from the main surface BSb of the base portion BS of the probe unit UT. The tip portions ST1 of the plurality of probe pins PB included in the probe unit UT face the plurality of terminals TE of the test board TB, respectively. At this stage, it is not essential that the tip portion ST1 of the probe pin PB is in contact with (electrically connected to) the terminal TE of the test board TB, but the tip portion ST1 of the probe pin PB is in contact with and electrically connected to the terminal TE of the test board TB in the case of FIG. 11. With the use of the test apparatus TS in which the probe unit UT has been rearranged in this way, the test step can be performed as follows. First, as shown in FIG. 12 and FIG. 13, the semiconductor device PKG to be tested is arranged on the seat DZ of the socket SK of the test apparatus TS. Namely, the semiconductor device PKG is arranged in the package housing portion of the socket SK. When the semiconductor device PKG is arranged on the seat DZ, the tip portion ST2 of each probe pin PB is in a state of facing the lead LD of the semiconductor device PKG. However, since the elastic body BN is lifting the seat DZ at this stage, the seat DZ is separated from the base portion BS, and each lead LD of the semiconductor device PKG is separated from the tip portion ST2 of the probe pin PB.
Therefore, at the stage of FIG. 12 and FIG. 13, each lead LD of the semiconductor device PKG is not in contact with the probe pin PB, and each lead LD of the semiconductor device PKG is not electrically connected to the probe pin PB located below the lead LD. Note that, at the stage where the semiconductor device PKG to be tested is arranged on the seat DZ as shown in FIG. 12 and FIG. 13, it is not essential that the tip portion ST1 of the probe pin PB comes into contact with (is electrically connected to) the terminal TE of the test board TB, but the tip portion ST1 of the probe pin PB is in contact with and electrically connected to the terminal TE of the test board TB in the case of FIG. 13. Then, as shown in FIG. 14 to FIG. 16, by pushing the semiconductor device PKG toward the probe unit UT with the pressing jig (not shown) or the like, the semiconductor device PKG descends together with the seat DZ and approaches the base portion BS, so that the outer lead portion of each lead LD of the semiconductor device PKG comes into contact with and is electrically connected to the tip portion ST2 of the probe pin PB located below the lead LD. Further, when the semiconductor device PKG is pushed toward the probe unit UT by the pressing jig (not shown) or the like as shown in FIG. 14 to FIG. 16, the tip portion ST1 of each probe pin PB comes into contact with and is electrically connected to the terminal TE of the test board TB located below the probe pin PB. Therefore, when the semiconductor device PKG is pushed toward the probe unit UT, as shown in FIG. 15 and FIG. 16, the tip portion ST2 of each probe pin PB is brought into contact with the lead LD (more specifically, the plating layer PL on the surface of the lead LD) of the semiconductor device PKG, and the tip portion ST1 of each probe pin PB is brought into contact with the terminal TE of the test board TB. As a result, the lead LD of the semiconductor device PKG and the terminal TE of the test board TB are electrically connected via the probe pin PB, and the plurality of leads LD of the semiconductor device PKG is electrically connected to the test circuit formed in the test head HE via the plurality of probe pins PB and the test board TB. The tip portion ST2 of the probe pin PB has a sharp portion (in the case of FIG. 16, it has a plurality of sharp portions), and this sharp portion bites into the lead LD (more specifically, the plating layer PL on the surface of the lead LD), so that the contact resistance between the lead LD and the probe pin PB can be reduced. In this state (state in FIG. 14 to FIG. 16), a current or voltage is supplied from the test circuit formed in the test head HE to the semiconductor chip CP of the semiconductor device PKG via the test board TB, the probe pin PB, and the lead LD, whereby the electrical test of the semiconductor device PKG can be performed. For example, by measuring the electrical characteristics of the semiconductor device PKG, the quality of the electrical characteristics of the semiconductor device PKG is tested. Thereafter, the pressing force applied to the semiconductor device PKG by the pressing jig or the like is released, and the semiconductor device PKG for which the electrical test has been completed is taken out from (the package housing portion of) the socket SK.
Then, after the semiconductor device PKG to be tested next is arranged on the seat DZ of the socket SK as shown in FIG. 12 and FIG. 13, the semiconductor device PKG is pushed toward the probe unit UT as shown in FIG. 14 to FIG. 16, and the electrical test of the semiconductor device PKG is performed. By repeating this, the electrical test can be sequentially performed for a plurality of semiconductor devices PKG.
<Studied Example> Next, a test apparatus according to a studied example studied by the inventors will be described with reference to FIG. 17. FIG. 17 is a cross-sectional view showing the principal part of the test apparatus TS101 according to the studied example studied by the inventors, and it shows the cross-section at the position corresponding to FIG. 7. In the test apparatus TS101 according to the studied example shown in FIG. 17, a probe pin PB101 is housed in a through hole TH101 of a base portion BS101 constituting a probe unit UT101. Also, a terminal TE101 on an upper surface of a test board TB101 and a lead LD101 of a semiconductor device PKG101 are electrically connected via the probe pin PB101. The probe pin PB101 has a spring SP101 and plungers PR101 and PR102 arranged so as to face each other via the spring SP101. A tip portion ST101 of the plunger PR101, that is, one tip portion ST101 of the probe pin PB101 protrudes from an upper surface of the base portion BS101 and is brought into contact with the lead LD101 (more specifically, a plating layer PL101 on the surface of the lead LD101). Further, a tip portion ST102 of the plunger PR102, that is, the other tip portion ST102 of the probe pin PB101 protrudes from a lower surface of the base portion BS101 and is brought into contact with the terminal TE101 on the upper surface of the test board TB101. Consequently, the lead LD101 is electrically connected to the terminal TE101 of the test board TB101 via the probe pin PB101. A current or voltage is supplied from the test circuit to the semiconductor device PKG101 via the test board TB101 and the probe pin PB101, whereby the electrical test of the semiconductor device PKG101 is performed. In the test apparatus TS101 of the studied example, the shape of the tip portion ST101 of the probe pin PB101 and the shape of the tip portion ST102 of the probe pin PB101 are different from each other. In the case of FIG. 17, the shape of the tip portion ST101 of the probe pin PB101 is a crown-like shape, and the shape of the tip portion ST102 of the probe pin PB101 is a conical shape (needle-like shape). The electrical test of the semiconductor device PKG101 can be performed using the probe pin PB101 provided in the probe unit UT101, but when the number of semiconductor devices PKG101 for which the electrical test has been performed increases, the tip portion ST101 of the probe pin PB101 that has been repeatedly brought into contact with the lead LD101 is worn away. If the wear amount of the tip portion ST101 of the probe pin PB101 increases, there is a fear that the contact resistance between the tip portion ST101 of the probe pin PB101 and the lead LD101 may increase, and the increase in the contact resistance may reduce the reliability of the electrical test of the semiconductor device. Therefore, after the electrical test of a predetermined number of semiconductor devices PKG101 has been performed using the test apparatus TS101, it is necessary to replace the probe pin PB101 included in the probe unit UT101 of the test apparatus TS101 with a new probe pin PB101. However, this will increase the manufacturing cost of the semiconductor device.
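To make the life comparison concrete before the next section, the following is a small hypothetical Python sketch (not part of the patent; the wear limit and the counting model are invented placeholders) contrasting the studied example, in which only one tip portion ever contacts the device leads, with a flippable probe unit whose two identical tip portions are used in turn:

    # Hypothetical model: each tip survives WEAR_LIMIT lead contacts (placeholder value).
    WEAR_LIMIT = 100_000

    def tests_until_replacement(flippable: bool) -> int:
        """Count electrical tests performed before the probe pin must be replaced."""
        tests = 0
        tip_wear = {"ST1": 0, "ST2": 0}
        contact_tip = "ST1"  # tip currently touching the device leads
        while tip_wear[contact_tip] < WEAR_LIMIT:
            tip_wear[contact_tip] += 1  # each test wears the contacting tip
            tests += 1
            if flippable and contact_tip == "ST1" and tip_wear["ST1"] >= WEAR_LIMIT:
                contact_tip = "ST2"  # turn the probe unit upside down
        return tests

    print(tests_until_replacement(flippable=False))  # 100000 (studied example)
    print(tests_until_replacement(flippable=True))   # 200000 (present embodiment)

Under this toy model, the symmetric, flippable pin roughly doubles the number of tests per pin, which is the cost effect discussed next.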
<Main Feature and Effect> The probe pin PB of the present embodiment is a probe pin used for performing an electrical test of a semiconductor device. By electrically connecting the external terminal (here, the lead LD) of the semiconductor device PKG to the terminal TE of the test board TB via the probe pin PB, the electrical test of the semiconductor device PKG can be performed. One of the main features of the present embodiment is that the probe pin PB has the tip portion ST1 and the tip portion ST2 which is located on the opposite side of the tip portion ST1 and which has the same shape as the tip portion ST1. As described above, in the case where the tip portion ST1 of the probe pin PB is brought into contact with the external terminal (here, the lead LD) of the semiconductor device to perform the electrical test, the tip portion ST1 of the probe pin PB that has been repeatedly brought into contact with the external terminal of the semiconductor device is worn away when the number of semiconductor devices for which the electrical test has been performed increases. If the wear amount of the tip portion ST1 of the probe pin PB increases, there is a fear that the contact resistance between the tip portion ST1 of the probe pin PB and the external terminal of the semiconductor device may increase, and the increase in the contact resistance may reduce the reliability of the electrical test of the semiconductor device. Therefore, in the present embodiment, when the tip portion ST1 of the probe pin PB has been worn away by repeatedly contacting the tip portion ST1 of the probe pin PB with the external terminal of the semiconductor device, the probe unit UT including the plurality of probe pins PB is turned upside down, and the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device to perform the electrical test of the semiconductor device. Consequently, the probe pin PB according to the present embodiment, which can perform the electrical test with the tip portion ST1 in contact with the external terminal of the semiconductor device and also with the tip portion ST2 in contact with the external terminal of the semiconductor device, has a longer life than the probe pin PB101 according to the studied example described above. By extending the life of the probe pin PB, the manufacturing cost of the semiconductor device can be suppressed. Note that the life of the probe pin corresponds to the number of possible electrical tests, and a long life corresponds to a large number of possible electrical tests. Here, in the studied example shown in FIG. 17 above, it is also conceivable to turn the probe unit UT101 including the probe pin PB101 upside down when the tip portion ST101 of the probe pin PB101 is worn away by repeatedly contacting the tip portion ST101 of the probe pin PB101 with the external terminal of the semiconductor device. In this case, the electrical test of the semiconductor device is performed by contacting the tip portion ST102 of the probe pin PB101 with the external terminal of the semiconductor device and contacting the tip portion ST101 of the probe pin PB101 with the terminal TE101 on the upper surface of the test board TB101. However, in the case of the studied example shown in FIG. 17, the shape of the tip portion ST101 of the probe pin PB101 and the shape of the tip portion ST102 of the probe pin PB101 are different from each other.
Specifically, in the case of FIG. 17, the shape of the tip portion ST101 of the probe pin PB101 is a crown-like shape, and the shape of the tip portion ST102 of the probe pin PB101 is a conical shape. Consequently, the connection state of the probe pin PB101 and the external terminal of the semiconductor device tends to vary between the case in which the tip portion ST101 of the probe pin PB101 is brought into contact with the external terminal (here, the lead LD101) of the semiconductor device and the case in which the tip portion ST102 of the probe pin PB101 is brought into contact with the external terminal (here, the lead LD101) of the semiconductor device. Therefore, the connection state of the probe pin PB101 and the external terminal of the semiconductor device tends to vary between the electrical test in which the tip portion ST101 of the probe pin PB101 is brought into contact with the external terminal of the semiconductor device and the electrical test in which the tip portion ST102 of the probe pin PB101 is brought into contact with the external terminal of the semiconductor device. This makes it difficult to stably perform the electrical test of the semiconductor device, and reduces the reliability of the electrical test. On the other hand, in the present embodiment, the tip portion ST1 and the tip portion ST2 of the probe pin PB have the same shape as each other. From another point of view, the tip portion ST1 and the tip portion ST2 of the probe pin PB have a symmetrical structure. For example, as shown in FIG. 8 to FIG. 10, when the tip portion ST1 of the probe pin PB has a crown-like shape, the tip portion ST2 of the probe pin PB also has a crown-like shape. Further, when the tip portion ST1 of the probe pin PB has a plurality of protrusions, the tip portion ST2 of the probe pin PB also has a plurality of protrusions, and the number and shape of the plurality of protrusions of the tip portion ST2 of the probe pin PB are the same as the number and shape of the plurality of protrusions of the tip portion ST1 of the probe pin PB. As a result, the connection state of the probe pin PB and the external terminal of the semiconductor device can be made almost the same between the case in which the tip portion ST1 of the probe pin PB is brought into contact with the external terminal of the semiconductor device and the case in which the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device. Therefore, the connection state of the probe pin PB and the external terminal of the semiconductor device can be made almost the same between the electrical test in which the tip portion ST1 of the probe pin PB is brought into contact with the external terminal of the semiconductor device and the electrical test in which the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device. Therefore, it is possible to stably perform the electrical test of the semiconductor device, and to increase the reliability of the electrical test. Further, in the present embodiment, the probe unit UT including the plurality of probe pins PB is used, and the plurality of probe pins PB is turned upside down by turning the probe unit UT upside down.
Therefore, it is possible to easily turn the plurality of probe pins PB upside down, and it is possible to easily make the transition from the electrical test in which the tip portion ST1 of the probe pin PB is brought into contact with the external terminal of the semiconductor device to the electrical test in which the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device. Further, since the plurality of probe pins PB included in the probe unit UT is also turned upside down by turning the probe unit UT upside down, the relative positional relationship of the plurality of probe pins PB in the probe unit UT does not change before and after the probe unit UT is turned upside down. Therefore, when the probe unit UT is turned upside down, it is possible to easily align the plurality of probe pins PB with respect to the plurality of terminals TE of the test board TB, and the positions of the plurality of probe pins PB and the positions of the external terminals of the semiconductor device to be tested can be matched easily. Here, the electrical test in which the tip portion ST1 of the probe pin PB is brought into contact with the external terminal of the semiconductor device and the tip portion ST2 of the probe pin PB is brought into contact with the terminal TE of the test board TB is referred to as the first electrical test. Also, the electrical test in which the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device and the tip portion ST1 of the probe pin PB is brought into contact with the terminal TE of the test board TB is referred to as the second electrical test. FIG. 3 to FIG. 7 correspond to the case where the first electrical test is performed, and FIG. 10 to FIG. 16 correspond to the case where the second electrical test is performed. First, the case in which the first electrical test is performed will be described. In the first electrical test, in the probe pin PB, the tip portion ST1 brought into contact with the external terminal of the semiconductor device is more likely to wear than the tip portion ST2 brought into contact with the terminal TE of the test board TB. This is because there is a possibility that a coating film such as an oxide film (a film that inhibits conduction) is formed on the outermost surface of the external terminal of the semiconductor device, and the tip portion ST1 of the probe pin PB needs to penetrate the film, so that the tip portion ST1 of the probe pin PB is likely to be worn away. Further, the solder material constituting the external terminal of the semiconductor device may adhere to the tip portion ST1 of the probe pin PB in some cases, and the tip portion ST1 of the probe pin PB must be cleaned in such a case by performing the cleaning treatment for the tip portion ST1 of the probe pin PB. However, there is a risk that the tip portion ST1 of the probe pin PB will be worn during the cleaning treatment. When the external terminal is the lead LD mentioned above, the solder material constituting the external terminal corresponds to the plating layer PL, and when the external terminal is the solder ball or the solder bump, the solder material constituting the external terminal corresponds to the solder material constituting the solder ball or the solder bump.
Further, since it is desirable that the external terminal of the semiconductor device (here, the lead LD) is formed of a metal material suitable for the external terminal of the semiconductor device, it is difficult to select a metal material capable of suppressing the wear of the tip portion ST1 of the probe pin PB as the metal material for the external terminal of the semiconductor device. Therefore, when the first electrical test is performed, it is difficult to suppress the wear of the tip portion ST1 of the probe pin PB due to the contact with the external terminal of the semiconductor device. On the other hand, when the first electrical test is performed, the wear of the tip portion ST2 of the probe pin PB due to the contact with the terminal TE of the test board TB can be easily suppressed. This is because the terminal TE of the test board TB is used for the electrical test, but is not used when the semiconductor device is used, and thus a metal material capable of suppressing the wear of the tip portion of the probe pin PB in contact with the terminal TE can be easily selected as the material of the terminal TE of the test board TB. For example, gold (Au) can be preferably used as the material of the terminal TE of the test board TB. As a result, the connection resistance between the terminal TE of the test board TB and the probe pin PB can be reduced, and the wear of the tip portion ST2 of the probe pin PB due to the contact with the terminal TE of the test board TB can be suppressed. Further, since the terminal TE of the test board TB does not contain the solder material, the solder material does not adhere to the tip portion ST2 of the probe pin PB in contact with the terminal TE of the test board TB, and therefore, it is not necessary to apply the cleaning treatment associated with the adhesion of the solder material to the tip portion ST2 of the probe pin PB, and it is possible to avoid the concern that the tip portion ST2 of the probe pin PB is worn by the cleaning treatment. From this point of view, it is easy to suppress the wear of the tip portion ST2 of the probe pin PB due to the contact with the terminal TE of the test board TB. Further, when testing a plurality of semiconductor devices, the contact of the tip portion ST1 of the probe pin PB with the external terminal of the semiconductor device is repeated, but the tip portion ST2 of the probe pin PB can be kept in contact with the terminal TE of the test board TB at all times. Therefore, the wear of the tip portion ST2 of the probe pin PB can be easily suppressed as compared with the tip portion ST1 of the probe pin PB that is repeatedly contacted with the external terminal of the semiconductor device. Therefore, in the first electrical test performed by contacting the tip portion ST1 of the probe pin PB with the external terminal of the semiconductor device and contacting the tip portion ST2 of the probe pin PB with the terminal TE of the test board TB, the tip portion ST1 of the probe pin PB which is brought into contact with the external terminal of the semiconductor device is more likely to be worn away than the tip portion ST2 of the probe pin PB which is brought into contact with the terminal TE of the test board TB. Therefore, as the number of semiconductor devices for which the first electrical test has been performed increases, the tip portion ST1 of the probe pin PB which has been repeatedly contacted with the external terminal of the semiconductor device is considerably worn away.
By comparison, the wear of the tip portion ST2 of the probe pin PB is suppressed to some extent. Namely, the wear amount of the tip portion ST1 of the probe pin PB is larger than the wear amount of the tip portion ST2 of the probe pin PB. When the wear amount of the tip portion ST1 of the probe pin PB becomes large in the case where a coating film such as an oxide film is formed on the surface of the external terminal of the semiconductor device, it becomes difficult for the tip portion ST1 of the probe pin PB to penetrate the film, and the connection resistance between the external terminal of the semiconductor device and the tip portion ST1 of the probe pin PB becomes large. This leads to the decrease in reliability of the electrical test of the semiconductor device. Therefore, after the first electrical test in which the tip portion ST1 of the plunger PR1 is brought into contact with the external terminal (lead LD) of the semiconductor device is performed for a certain number of semiconductor devices, the probe unit UT including the plurality of probe pins PB is turned upside down to make a transition to the second electrical test. In this case, the transition from the electrical test performed by contacting the tip portion ST1 of the probe pin PB having a large wear amount with the external terminal of the semiconductor device to the electrical test performed by contacting the tip portion ST2 of the probe pin PB having a smaller wear amount than the tip portion ST1 with the external terminal of the semiconductor device is made. Since the electrical test of the semiconductor device is performed by contacting the tip portion ST2 of the probe pin PB having a small wear amount with the external terminal of the semiconductor device, the life of the probe pin PB is extended, and the number of the electrical tests of the semiconductor device that can be performed without replacing the probe pin PB can be increased. Therefore, the manufacturing cost of the semiconductor device can be suppressed. On the other hand, in the electrical test performed by contacting the tip portion ST2 of the probe pin PB having a small wear amount with the external terminal of the semiconductor device, the tip portion ST1 of the probe pin PB having a large wear amount comes into contact with the terminal TE of the test board TB. As described above, when the tip portion ST1 of the probe pin PB having a large wear amount is brought into contact with the external terminal of the semiconductor device, there is a concern that the connection resistance between the tip portion ST1 of the probe pin PB and the external terminal of the semiconductor device increases. However, when the tip portion ST1 of the probe pin PB having a large wear amount is brought into contact with the terminal TE of the test board TB, there is less concern that the connection resistance between the tip portion ST1 of the probe pin PB and the terminal TE of the test board TB increases. The reason is as follows. That is, when the tip portion ST1 of the probe pin PB having a large wear amount is brought into contact with the external terminal of the semiconductor device, the tip portion ST1 of the probe pin PB cannot penetrate the coating film (oxide film, etc.) on the surface of the external terminal, and there is a concern that the connection resistance between the external terminal of the semiconductor device and the tip portion ST1 of the probe pin PB increases.
On the other hand, a coating film such as an oxide film is unlikely to be formed on the surface of the terminal TE of the test board TB. Further, when testing a plurality of semiconductor devices, the tip portion ST1 of the probe pin PB can be kept in contact with the terminal TE of the test board TB at all times. Therefore, when the tip portion ST1 of the probe pin PB is brought into contact with the terminal TE of the test board TB, the tip portion ST1 of the probe pin PB does not need to penetrate a coating film on the surface of the terminal TE, and thus the connection resistance between the terminal TE of the test board TB and the tip portion ST1 of the probe pin PB can be suppressed even when the wear amount of the tip portion ST1 of the probe pin PB is large. Therefore, in the second electrical test performed by contacting the tip portion ST2 of the probe pin PB with the external terminal of the semiconductor device and contacting the tip portion ST1 of the probe pin PB with the terminal TE of the test board TB, the problem hardly occurs even when the wear amount of the tip portion ST1 of the probe pin PB is large. Therefore, in the present embodiment, after the first electrical test in which the tip portion ST1 of the probe pin PB is brought into contact with the external terminal of the semiconductor device, the second electrical test in which the tip portion ST2 of the probe pin PB is brought into contact with the external terminal of the semiconductor device is performed, whereby the tip portion of the probe pin PB having a small wear amount can be brought into contact with the external terminal of the semiconductor device. Consequently, the life of the probe pin PB is extended, and the number of the electrical tests of the semiconductor device that can be performed without replacing the probe pin PB can be increased. Therefore, the manufacturing cost of the semiconductor device can be suppressed. Further, in the present embodiment, the tip portion ST1 and the tip portion ST2 of the probe pin PB have the same shape. In the case of FIG. 7 to FIG. 9 mentioned above, both the tip portion ST1 and the tip portion ST2 of the probe pin PB have a crown-like shape. As another aspect, the tip portion ST1 and the tip portion ST2 of the probe pin PB may have a shape other than the crown-like shape, but even in that case, the tip portion ST1 and the tip portion ST2 of the probe pin PB have the same shape. FIG. 18 is a side view showing a modification of the probe pin PB, and corresponds to FIG. 8 mentioned above. In the case of FIG. 18, the shape of both the tip portion ST1 and the tip portion ST2 of the probe pin PB is the conical shape (needle shape). However, when the tip portions ST1 and ST2 of the probe pin PB have a conical shape, the external terminal of the semiconductor device (or the terminal of the test board) and the tip portion of the probe pin PB come into contact at one point. Meanwhile, when the tip portions ST1 and ST2 of the probe pin PB have a crown-like shape, the external terminal of the semiconductor device (or the terminal of the test board) and the tip portion of the probe pin PB come into contact at multiple points. Therefore, considering the connectivity between the external terminal of the semiconductor device (or the terminal of the test board) and the probe pin PB, it is more preferable that the tip portions ST1 and ST2 of the probe pin PB have a crown-like shape rather than a conical shape.
Further, in the present embodiment, the case where the lead is applied as the external terminal of the semiconductor device with which the probe pin PB is brought into contact has been described as an example. As another aspect, a ball electrode (bump electrode) such as a solder ball (solder bump) can be applied as the external terminal of the semiconductor device with which the probe pin PB is brought into contact, in addition to the lead. Therefore, the semiconductor device to be tested may be a BGA (Ball Grid Array) semiconductor package or the like. The invention made by the inventors has been specifically described above based on the embodiment thereof, but it is needless to say that the present invention is not limited to the embodiment described above and can be variously modified within the range not departing from the gist thereof.
11860226
DETAILED DESCRIPTION
The technical solutions in embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Apparently, the described embodiments are merely part rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative efforts should fall within the protection scope of the present application. The terms “first”, “second”, “third”, “fourth”, and so on (if any) in the specification, claims and the accompanying drawings of the present application are intended to distinguish between similar objects but do not necessarily indicate a specific order or sequence. It should be understood that the data used in such a way may be exchanged under proper conditions to make it possible to implement the described embodiments of the present application in other sequences apart from those illustrated or described here. Moreover, the terms “include”, “contain”, and any other variants mean to cover the non-exclusive inclusion, for example, a process, method, system, product, or device that includes a list of steps or units is not necessarily limited to those steps or units which are clearly listed, but may include other steps or units which are not expressly listed or inherent to such a process, method, system, product, or device. Prior to the formal introduction to embodiments of the present application, the application scenario of the present application as well as the problems in the prior art are first explained with reference to the accompanying drawings. FIG. 1 is a schematic diagram of the application scenario of the present application. In order to detect electronic components, chips, integrated circuits, and other devices, in some technologies, a plurality of devices under test (DUTs for short) can be placed at a plurality of test locations of a daisy chain test platform 20 at the same time. For example, DUT1-DUT8 plotted as examples in FIG. 1 are placed at test locations W1-W8 provided by the test platform 20, respectively. Afterwards, the signal source 10 inputs the test signal via a test signal input interface W0 on the test platform 20, and given that the plurality of test locations on the test platform 20 are all connected to the interface W0, the DUT disposed at each test location can receive the same test signal. At this time, the batch detection of devices can be realized by detecting whether the plurality of DUTs receive test signals, allowing for a higher detection efficiency. Further, a plurality of test locations can be set on the test platform, and the distance between each of the test locations and the signal source varies, such that when the signal source 10 sends a test signal to the test platform 20 at time T0 in an example as shown in FIG. 1, a certain delay exists between the time when the test signal is actually received by the DUT disposed at each test location and time T0. This time delay indicates the time loss caused by the transmission of the test signal across the transmission path on the test platform 20. The farther the test location on the test platform 20 is from the signal source 10, the greater the time delay of the test signal received by the DUT.
For example, DUT1 and DUT5, disposed at the test locations W1 and W5 closest to the signal source 10, will receive the test signal at time T1 after T0; and similarly, DUT4 and DUT8, disposed at the test locations W4 and W8, which are farthest from the signal source 10, will not receive the test signal until time T4 after T0. In some embodiments, since each test location on the test platform is fixed, after the time delays T1-T4 generated when the test signal is received at each of the test locations are determined, time offset can be conducted via the offset device 30 to compensate for such time delay caused by the transmission distance. For example, in the example shown in FIG. 1, the offset device 30 may be connected to the signal source 10 and the test platform 20, and can control the DUT disposed at each test location on the test platform 20. The offset device 30 may be a computer, a server, a chip, or another electronic device capable of performing related data processing and control functions. Once the time delays T1-T4 generated at each test location on the test platform 20 are determined, the offset device 30 can control each DUT on the test platform 20. When the signal source 10 sends a test signal at time T0, the offset device 30 controls the DUT1 disposed at the test location W1 to receive the test signal at time T1, and the DUT2 disposed at the test location W2 to receive the test signal at time T2, and so on. In some embodiments, FIG. 2 is a schematic diagram of the test signal received by the DUTs at different test locations on the test platform according to an embodiment. Upon time offset by the offset device 30, when the signal source sends out the test signal L0 at time T0, all DUTs on the test platform 20 can accurately receive the test signals L1-L4 with the same waveform as L0, thus ensuring the smooth completion of the subsequent test. However, in actual tests, even if time offset is performed for a DUT on the test platform 20 by the offset device 30, some cases in which the DUT cannot accurately receive the test signal may still occur occasionally. Observation shows that, on the transmission path of the test signal from the signal source 10 to the corresponding DUT on the test platform 20, in addition to the time loss caused by the transmission path, there still exists a time delay caused by the impedance matching of DUTs on the test platform 20. The impedance matching of the DUT at each test location varies: a time delay is caused by the impedance matching (1) between the impedance of the DUT at the test location and the impedance on the signal transmission line at the test location and/or (2) between the impedance of the DUT at the test location and the impedance of another DUT at other test locations, which leads to a certain time delay in the transmission of the test signal. As a result, the DUT cannot receive the test signal accurately even at the time offset by the offset device 30, which affects the accuracy of the test platform for testing DUTs and other devices. For example, FIG. 3 is a schematic diagram of the test signal received by the DUTs at different test locations on the test platform according to another embodiment. When the signal source sends the test signal L0 at time T0, even after the time offset carried out by the offset device 30, a certain time delay (Propagation Delay, referred to as Tpd2) still exists in waveform L2′ received by the DUT2 and DUT6 at the test locations W2 and W6, as compared with ideal waveform L2 shown in FIG. 2.
Similarly, the waveforms received by the DUT3 and DUT7 at the test locations W3 and W7 have a time delay Tpd3, and the waveforms received by DUT4 and DUT8 at the test locations W4 and W8 have a time delay Tpd4. Moreover, given that the impedance at each test location varies, and the impedance on the transmission path between the signal source and each test location is different, the impedance matching at each test location varies, resulting in a different time delay caused by impedance matching at each test location. At the same time, the impedance at the test locations with different distances from the signal source varies on the transmission path, and the impedance at the test locations with the same distance from the signal source is regarded as the same. Consequently, the time delay caused by the matching impedance of the test locations with the same distance from the signal source is the same. Finally, as shown in FIG. 3, the time delay caused by the impedance matching between the impedance of the DUT itself and the impedance on the propagation path and the impedance of the DUTs at other locations will stop the DUT from accurately receiving the test signal, which in turn affects the subsequent test results against the DUT. Therefore, the present application provides a time offset method and device for a test signal, which can implement time offset for the time delay caused by the impedance matching in each DUT on the test platform, so as to overcome the technical problem of the time delay caused by the impedance matching to the test signal, which in turn improves the accuracy of the test platform for testing DUTs and other devices. The technical solution of the present application will be described in detail below with reference to specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeatedly described in some examples. FIG. 4 is a flowchart of a time offset method for a test signal according to an embodiment of the present application. The method shown in FIG. 4 may be applied to the scenario shown in FIG. 1, and is implemented by the offset device 30, where the method includes: S101: When the test signal is sent from the signal source 10 to the DUTs at a plurality of test locations on the test platform 20, the offset device 30 acquires the time parameters of each DUT. Specifically, when offsetting the time delay on a transmission path of test signals, the offset device 30 first needs to determine the time delay Tpd caused by impedance matching at each test location. In order to obtain an accurate time delay Tpd caused by impedance matching at each test location, and to reduce the influence of differences between different chips at different test locations on the detection results of the time delay Tpd, a method adoptable in some embodiments of the present application is to place the same DUT at different locations on the test platform, and then collect the time delay Tpd caused by the corresponding impedance matching at each location at the time when the DUT receives the test signal at that test location. For example, FIG. 5 is a schematic diagram of a test state according to the present application. As shown in this figure, a DUT0 is selected and sequentially placed at each of the test locations W1-W8 on the test platform 20, and then the time parameters of the DUT0 at each test location are collected.
In some embodiments, the location of the DUT0 may be adjusted by using the offset device 30 to control a slide rail, a mechanical arm, etc., to place the DUT0 at different test locations according to the test requirements, and then the time parameters can be obtained. Alternatively, an operator can manually place the DUT0 at different test locations, and once the offset device 30 determines the location of the DUT0, the time parameters of the DUT0 at that location can be acquired. In some embodiments, in consideration of the effect of impedance matching caused by the impedance of DUTs at other test locations, when a DUT0 is placed at one test location, e.g., at the test location W1 in FIG. 5, DUTs can also be set at the other test locations W2-W8, thereby ensuring the symmetry and stability of signals on the test platform, and obtaining the time parameters in a way closer to the real test environment. Since the parameters of the DUTs at other test locations are not obtained for the time being, the DUTs disposed at the other test locations in FIG. 5 are represented by dotted lines. In the example shown in FIG. 5, a DUT0 is placed at the test location W1 on the test platform 20. At this moment, a signal source 10 can send a test signal to an input interface W0 of the test platform 20, and at the same time, the signal source 10 also separately sends the DUT0 a TCK signal which may be determined in advance as shown in FIG. 2. The connection relationship where the signal source 10 sends a TCK signal to the DUT0 disposed at the test location W1 is not shown in this figure. The DUT0 can receive the TCK signal and the test signal at the same time, that is, the test signal can be received according to the TCK signal; for example, a receiving action is triggered at the rising edge or falling edge of the TCK signal. Furthermore, when the DUT0 receives the test signal, an important parameter is the input setup time (TIS for short) at which the DUT0 receives the test signal. When the DUT receives the test signal under the trigger of the TCK signal, the test signal should reach the DUT before the trigger of the TCK signal, such that the test signal received is stable when the DUT is triggered by the TCK signal and begins to receive the test signal, which is equivalent to the operation of providing a certain time (hence the name “TIS”) for the test signal to become stable in advance. If the time difference between the test signal and the TCK signal is less than the TIS, the test signal received by the DUT under the trigger of the TCK signal is not stable yet, which will cause the failure of the DUT to receive the test signal. Therefore, upon detection of the time delay Tpd caused by impedance matching at a test location, the time delay Tpd can be quantified according to the TIS at which the DUT placed at the test location receives the test signal, and based thereupon, subsequent time offset is carried out. In some embodiments, SHMOO testing can be conducted on the DUTs disposed at each test location, thus acquiring time characteristic diagrams of the TIS at each test location. Each time characteristic diagram is uniformly distributed to indicate whether a DUT at a test location can accurately receive the test signal under the influence of impedance matching when receiving the test signal. Additionally, the time characteristic diagrams can be used as the time parameters acquired in S101.
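As a rough illustration of how such a time characteristic diagram could be assembled, the following Python sketch sweeps the TCK period according to a first preset rule and the TIS according to a second preset rule, recording state A or state B per cell; the dut_receives_ok callback and all names are assumptions for illustration, not an interface defined by the present application:

    from typing import Callable, Dict, List, Tuple

    State = str  # "A" = test signal received accurately, "B" = not received accurately

    def collect_shmoo(
        tck_periods: List[float],   # x-axis values, varied per the first preset rule
        tis_values: List[float],    # y-axis values, varied per the second preset rule
        dut_receives_ok: Callable[[float, float], bool],  # runs one trial (assumed)
    ) -> Dict[Tuple[float, float], State]:
        """Build the matrix of sub-parameters (TCK period, TIS, state)."""
        shmoo: Dict[Tuple[float, float], State] = {}
        for period in tck_periods:
            for tis in tis_values:
                shmoo[(period, tis)] = "A" if dut_receives_ok(period, tis) else "B"
        return shmoo

One such matrix would be collected per test location, mirroring the placement of the DUT0 at W1-W8 in turn.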
Exemplarily, FIG. 6 is a schematic diagram of time parameters according to an embodiment of the present application, which is a state schematic diagram illustrating that the DUT0 disposed at the test location W1 in the scenario as shown in FIG. 5 receives test signals with different TCK cycles and TISs, respectively. The time parameters shown in FIG. 6 generally include a correspondence of the TCK signals with a plurality of frequencies, the TIS of a plurality of test signals, and identification information, where the identification information can be understood as the color of each small box on the matrix composed of all the small boxes in FIG. 6. More specifically, the time parameters include a plurality of sub-parameters that are distributed in the form of a matrix: each small box in the figure represents one sub-parameter, and all these sub-parameters together form the time parameters featuring a plurality of rows and columns as a whole. The horizontal coordinate X of the matrix composed of the plurality of sub-parameters denotes the cycle of the TCK signals, which changes according to a certain first preset rule, and the vertical coordinate Y denotes the TIS, which changes according to a certain second preset rule. Each sub-parameter can be understood as a three-dimensional array including three parameters, namely, the frequency of the TCK signal, the TIS, and the state of sub-information. The state of each sub-information is indicative of whether the DUT can accurately receive a test signal with the TIS and the TCK signal. If yes, then the state in the sub-information is the first state A (the lighter color of a small box in FIG. 6 indicates that the state of the sub-information corresponding to the small box is the first state A). If no, then the state in the sub-information is the second state B (the darker color of a small box in FIG. 6 indicates that the state of the sub-information corresponding to the small box is the second state B). The process of acquiring the time parameters as shown in FIG. 6 is described below with reference to FIG. 7. FIG. 7 is a schematic diagram illustrating the process of acquiring time parameters according to an embodiment of the present application. In order to acquire the time parameters of the DUT0 disposed at the test location W1 in the scenario shown in FIG. 5, when the signal source 10 sends the test platform 20 test signals and TCK signals with one frequency, the offset device 30 sequentially adjusts the TIS according to a second preset rule, and controls the DUT0 to receive the test signal with a plurality of TISs and the current TCK signal, respectively. Finally, based on whether the DUT0 can successfully receive the test signal according to a TIS and a TCK signal, whether the identification information corresponding to the TIS and TCK is the first state A or the second state B is determined. For example, as shown in FIG. 6, sub-information a, sub-information b, and sub-information c corresponding to a TCK signal on the same x-coordinate are taken as examples. To obtain the sub-information a, when the signal source 10 sends the test signal and the TCK signal to the test platform 20, assuming that the TCK signal triggers the DUT0 at the test location W1 to receive the test signal at time T11, at that moment, the test signal has started being sent at time T10 before T11.
Assuming that the minimum TIS required for the DUT0 to receive the test signal is T20, which is later than T10, when the DUT0 starts to receive the test signal at time T11 under the trigger of the TCK signal, the test signal has already reached a stable state. Therefore, in the sub-information a, the DUT0 can accurately receive the test signal according to the TCK signal, and thus the identification information of the sub-information corresponding to the TCK signal and TIS=(T10-T11) in FIG. 6 is denoted as the first state A. Subsequently, in order to obtain the sub-information b, when the signal source 10 sends the test signal and the TCK signal to the test platform 20, assuming that the TIS is shortened to T20-T11, the relative location relationship between the TCK signal and the test signal is obtained as shown in FIG. 7. In order to obtain this location relationship, the period of the test signal can be moved backward or the cycle of the TCK signals forward. These TCK signals and test signals should be understood as the same signals, arranged at different times on a regular basis. When the TCK signal triggers the DUT0 at the test location W1 to receive the test signal at time T11, the test signal has already started being sent at time T20 before T11, such that when the DUT0 starts to receive the test signal at time T11 under the trigger of the TCK signal, the test signal has already reached a stable state. Therefore, the identification information of the sub-information b corresponding to the TCK signal and TIS=(T20-T11) in FIG. 6 is denoted as the first state A. Similarly, when the DUT0 receives the test signal at time T11 under the trigger of the TCK signal but the test signal just starts to be sent at T30 and does not have enough time to reach a stable state, then regarding the sub-information c acquired in this process, the DUT0 cannot accurately receive the test signal based on the TCK signal. Therefore, the identification information of the sub-information c corresponding to the TCK signal and TIS=(T30-T11) in FIG. 6 is denoted as the second state B. It is understandable that after the DUT0 at the test location W1 receives all the sub-information of the test signal in one column in FIG. 6 according to the TCK signal and at different TISs, the frequency of the TCK is then adjusted according to the first preset rule, the above process is repeated with the new TCK signal, and after all the sub-information in the plurality of columns in FIG. 6 is calculated, the complete time parameters of FIG. 6, generated when the DUT0 at the test location W1 receives the test signal, are obtained. The DUT0 is then moved from the test location W1 to another test location, and subjected to the same SHMOO testing as that carried out at the W1 location, so as to obtain the time parameters of the DUT at each test location at the time of receiving the test signal. For example, FIG. 8 is a schematic diagram illustrating another test state according to the present application. In this figure, a scenario where the DUT0 is disposed at the test location W4 of the test platform is shown. FIG. 9 is a schematic diagram of time parameters according to another embodiment of the present application, which, as shown in the scenario of FIG. 8, illustrates the state of the DUT0 disposed at the test location W4 when such DUT receives test signals with different TCK signals and at different TISs, respectively.
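The pass/fail rule behind sub-information a-c can be summarized in a short hedged sketch: a cell passes only when the test signal has been stable for at least the minimum TIS before the TCK trigger. The times and the minimum-TIS value below are illustrative placeholders, not values from the present application:

    T_MIN_TIS = 2.0  # assumed minimum TIS required by the DUT0 (placeholder value)

    def sub_information_state(signal_start: float, trigger_time: float) -> str:
        """Return 'A' (first state) or 'B' (second state) for one matrix cell."""
        tis = trigger_time - signal_start  # how long the signal settles before the trigger
        return "A" if tis >= T_MIN_TIS else "B"

    # Signals sent early enough (like sub-information a/b) pass; a late one (like c) fails:
    print(sub_information_state(signal_start=0.0, trigger_time=5.0))  # A
    print(sub_information_state(signal_start=4.0, trigger_time=5.0))  # B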
For example, FIG. 8 is a schematic diagram illustrating another test state according to the present application. In this figure, a scenario where the DUT0 is disposed at the test location W4 of the test platform is shown. FIG. 9 is a schematic diagram of time parameters according to another embodiment of the present application, which illustrates, for the scenario of FIG. 8, the state of the DUT0 disposed at the test location W4 when such DUT receives test signals with different TCK signals and at different TISs, respectively. For the specific process of acquiring the time parameters, reference can be made to FIG. 10. FIG. 10 is a schematic diagram illustrating the process of acquiring time parameters according to another embodiment of the present application. See the descriptions of FIGS. 5-7 for the detailed process of acquiring the time parameters of the DUT0 at the test location W4 shown in FIG. 9 in the embodiments of FIGS. 8-10. The implementations and principles described for FIGS. 8-10 are the same as those of FIGS. 5-7 and will not be repeated herein. Finally, the DUT0 is placed at each of the test locations W1-W8 on the test platform 20 in turn, thus acquiring the time parameters at each test location. For example, FIG. 11 is a schematic diagram of time parameters according to yet another embodiment of the present application, which illustrates the time parameters obtained in S101 for the DUT0 disposed at each of the test locations W1-W8. Given that the time parameters are related to the distance from a test location to the signal source, the time parameters at W1-W4 and W5-W8 correspond one to one and are regarded as the same. For this reason, FIG. 11 only shows the time parameters at W1-W4 as an example. S102: The offset device 30 determines the offset parameters corresponding to the test locations of the DUTs on the test platform 20 according to the target time parameters and the time parameters of each DUT. When the time parameters at each test location are determined, the time parameters at the other test locations can be offset according to the target time parameters, where the target time parameters of the target DUT can be acquired in advance or can be preset. Given that the test locations W1 and W5 are closest to the signal source, the impedance on the transmission path causes minimal, or even negligible, time delay on the test signal. Therefore, the target time parameters can be the time parameters on the test platform 20 corresponding to the test locations W1 and W5, which are closest to the signal source. In this embodiment, taking the target time parameters to be the time parameters of the DUT0 at the test location W1 as shown in FIG. 6 as an example, the time parameters at the test locations W2-W4 and W6-W8 can then be compared against the target time parameters at the test location W1, thereby obtaining the corresponding offset parameters at each of the test locations W2-W4 and W6-W8. Specifically, for example, when adjusting the time parameters at the test location W4 shown in FIG. 9 with the time parameters shown in FIG. 6 as the target parameters, a plurality of critical TISs, one for the TCK signal of each frequency, are first determined from the time parameters shown in FIG. 9. For example, among the sub-information d-f corresponding to the TCK signal of one frequency shown in FIG. 9, the critical TIS is the TIS corresponding to the sub-information e: assuming the period of the TCK signal is fixed, when the test signal is received at a TIS larger than that of the sub-information e, the sub-information above the sub-information e all corresponds to the first state, and when the test signal is received at a TIS smaller than that of the sub-information e, the sub-information below the sub-information e all corresponds to the second state. In the same way, the plurality of critical TISs corresponding to all the TCK signals in FIG. 9 can be obtained.
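The critical TIS just described, i.e., the boundary between the first-state and second-state sub-information within one column, can be extracted mechanically from such a matrix. A sketch, reusing the matrix produced by the sweep above:

```python
# Sketch: for each TCK cycle, take the critical TIS as the smallest TIS
# whose sub-information is still in the first state A (the boundary row);
# everything below it is in the second state B.

def critical_tis_per_tck(matrix):
    critical = {}
    for (tck, tis), state in matrix.items():
        if state == 'A' and (tck not in critical or tis < critical[tck]):
            critical[tck] = tis
    return critical   # {tck_period: critical TIS}
```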
Then, when the critical TIS corresponding to the same TCK signal in the target time parameters in FIG. 6 is subtracted from the critical TIS of each TCK signal in FIG. 9, the differences of the critical TISs corresponding to the plurality of TCK signals are obtained, and these constitute the offset parameter corresponding to the time parameters at the test location W4. Finally, the offset device 30 can, according to the same method described above, determine the offset parameters of the time parameters at each test location on the test platform 20. Then, in S103, the offset device 30 sends the offset parameters determined in S102 to the signal source 10, such that a time offset is applied to each TCK signal according to the offset parameters when the signal source 10 subsequently sends the TCK signal to the DUT on the test platform 20, thus ensuring that the DUT receiving the TCK signal can accurately receive the test signal. For example, FIG. 12 is a schematic diagram illustrating the offset of time parameters according to the present application. This figure gives a waveform schematic plot of a DUT receiving test signals for the sub-information b, which corresponds to one TCK signal and a critical TIS = T20 - T11 in the time parameters shown in FIG. 6. It also gives a waveform schematic plot of the corresponding DUT receiving test signals for the sub-information e, which corresponds to the same TCK signal and a critical TIS = T40 - T11 in the time parameters shown in FIG. 9. After the difference between the critical TISs of the two time parameters is computed in S102, it can be concluded that the offset parameter of this TCK signal in the time parameters of FIG. 9 is T40 - T20. When the offset device 30 sends this offset parameter to the signal source 10, the signal source 10 can move the TCK signal sent to the DUT forward by the time T40 - T20 when subsequently sending the test signal to the DUT at the test location W4. At this time, the waveform received by the DUT is as shown in the sub-information e' of FIG. 12, and the interval between T40 and the trigger time T11' is greater than the minimum TIS, such that the identification information is switched from the second state B corresponding to the sub-information e to the first state A corresponding to the sub-information e'. According to the same method mentioned above, after the time parameters calculated at the test locations W2-W4 are offset, respectively, the offset time parameters W2'-W4' shown in FIG. 11 can be obtained. It can be observed that in the modified time parameters, the number of pieces of sub-information corresponding to the first state is increased. The time parameters at the test locations W6-W8 have the same states as the time parameters at W2-W4 before and after modification, which will not be repeated herein. In some embodiments, the offset of the time parameters may be specifically implemented by modifying the time parameter LINE1 in the configuration file TPD OFFSET as shown in FIG. 13. For example, FIG. 13 is a schematic diagram illustrating a time offset method according to the present application. The time offset method includes modifying the time parameter 362 of the TCK signal output by a pin F802 of a DUT2 disposed at the test location W2 to the time parameter 392, modifying the time parameter 724 of the TCK signal output by a pin F803 of a DUT3 disposed at the test location W3 to the time parameter 784, modifying the time parameter 1086 of the TCK signal output by a pin F804 of a DUT4 disposed at the test location W4 to the time parameter 1176, and so on. It should be noted that the file shown in FIG. 13 is only exemplary and is intended to illustrate one implementation of the time offset of the present application, rather than to limit the time offset method, parameters, or numerical values.
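In code, S102 and the FIG. 13-style file modification reduce to a per-TCK subtraction and an additive shift. A sketch under the same assumptions as the snippets above; the concrete numbers are purely illustrative:

```python
# Sketch of S102: the offset parameter for a test location is, per TCK
# cycle, its critical TIS minus that of the target location (e.g.,
# T40 - T20 for sub-information e versus sub-information b).

def offset_parameters(target_critical, local_critical):
    return {tck: local_critical[tck] - target_critical[tck]
            for tck in target_critical if tck in local_critical}

# Sketch of the FIG. 13-style file edit: advance each TCK time parameter
# by its location's offset (e.g., 362 + 30 = 392 for pin F802 at W2,
# using purely illustrative numbers).
def shifted_time_parameter(current_value, offset):
    return current_value + offset
```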
Therefore, according to the time offset method for a test signal provided in embodiments of the present application, when a signal source sends test signals to DUTs on a test platform, the offset device can determine the time delay imposed on the test signal for the DUT disposed at each test location by the impedance matching at that location, and conduct a time offset for the TCK signals sent by the signal source to the different DUTs according to that time delay, thus resolving the technical problem of the time delay imposed on the test signal by the impedance matching. In this way, the DUT can receive the test signal more accurately, which in turn improves the accuracy of the test platform when testing a device such as a DUT. In some embodiments, the time parameters corresponding to each test location can be offset as described above, while in other embodiments, the test locations on the test platform where an offset is needed can be identified first, and only the time parameters at that identified subset of test locations are then offset, which reduces the amount of invalid calculation and improves efficiency. For example, a standard DUT can be separately placed at a plurality of test locations on a test platform, and when a signal source sends a test signal and a TCK signal to the standard DUT, respectively, it is determined at each test location whether the standard DUT can accurately receive the test signal according to the preset TIS and the received TCK signal. Afterwards, those test locations at which the standard DUT cannot accurately receive the test signal are taken as locations to be offset, and the TCK signals sent to such locations are subjected to a time offset; for those test locations at which the standard DUT can accurately receive the test signal, no time offset is performed (see the screening sketch following this passage). In some embodiments of the present application, in order to acquire the corresponding time parameters at different test locations, the same DUT is placed at the different test locations. In other embodiments, when it is ensured that the impedance of all DUTs is the same or approximately equivalent, different DUTs can be disposed at different test locations on the test platform, so as to acquire the time parameters of a plurality of DUTs at each test location at the same time. The aforementioned embodiments introduce the time offset method for a test signal provided by embodiments of the present application. In order to realize the functions in the method provided by embodiments of the present application, the offset device, as an implementation body, may include a hardware structure and/or a software module to realize the above functions in the form of a hardware structure, a software module, or a hardware structure in combination with a software module. Whether one of the above functions is carried out in the form of a hardware structure, a software module, or a hardware structure in combination with a software module depends on the particular application and the design constraints of the technical solution.
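The screening optimization referenced above, which tests a standard DUT once per location before running any full sweep, might look as follows; place_dut and the other callables are hypothetical harness hooks, not part of the disclosure:

```python
# Sketch: identify only the test locations that actually need a time
# offset, then restrict the sweep/offset flow to those locations.

def locations_to_offset(test_locations, place_dut, send_signals,
                        dut_received_ok, preset_tis, tck_period):
    needs_offset = []
    for loc in test_locations:                    # e.g., ["W1", ..., "W8"]
        place_dut(loc)                            # move the standard DUT
        send_signals(tck_period=tck_period, tis=preset_tis)
        if not dut_received_ok():                 # reception failed here
            needs_offset.append(loc)
    return needs_offset                           # offset only these locations
```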
For example, FIG. 14 is a schematic structural diagram of an offset device for test signals according to an embodiment of the present application, which illustrates a possible implementation of the offset device 30, the offset device 30 including an acquisition module 301, a processing module 302, and an offset module 303. Specifically, the acquisition module 301 is configured to acquire the time parameters of each of the DUTs when a signal source sends a test signal to the DUTs at a plurality of test locations on a test platform, where the impedance on the transmission path between the signal source and each of the test locations differs from one another, and the time parameters are used for indicating whether the impedance on each transmission path has an impact on the reception of the test signal by the DUT; the processing module 302 is configured to determine, based on target time parameters and the time parameters of each of the DUTs, the offset parameters corresponding to the plurality of test locations where the DUTs are located; and the offset module 303 is configured to send the signal source the plurality of offset parameters corresponding to the plurality of test locations, such that the signal source performs, according to the plurality of offset parameters, a time offset for the TCK signals sent to the plurality of test locations. In some embodiments, the target time parameters are used for indicating whether the impedance on the transmission path from the signal source to a target test location has an impact on the reception of test signals by a target DUT disposed at the target test location, where the target test location is the test location closest to the signal source on the test platform. In some embodiments, the time parameters include a correspondence of TCK signals with a plurality of frequencies, a plurality of TISs of the test signal, and identification information, where the identification information is used for indicating whether the DUTs, when receiving test signals at the plurality of TISs respectively, can accurately receive the test signal according to the TCK signals with the plurality of frequencies, respectively. In some embodiments, the TCK signals with the plurality of frequencies change according to a first preset rule, and the plurality of TISs change according to a second preset rule. In some embodiments, the identification information includes a plurality of pieces of sub-information, each of which indicates whether the DUTs can accurately receive the test signal when receiving a TCK signal with one of the plurality of frequencies at one of the plurality of TISs; sub-information in a first state indicates that the DUTs can accurately receive the test signal at the given TIS according to the TCK signal with the given frequency, and sub-information in a second state indicates that the DUTs cannot accurately receive the test signal at the given TIS according to the TCK signal with the given frequency.
In some embodiments, the acquisition module 301, when determining the time parameters of a first DUT of the DUTs, is specifically configured to: determine that a signal source sends the test signal and a first TCK signal of the TCK signals with a plurality of frequencies to the first DUT; based on a second preset rule, sequentially set the TIS of the first DUT to each of the plurality of TISs; control the first DUT to receive the test signal with the first TCK signal at a first TIS of the plurality of TISs; and determine, based on whether the first DUT successfully receives the test signal, the identification information corresponding to the first TIS and the first TCK signal in the first time parameters. In some embodiments, the processing module 302, when determining first offset parameters of a first DUT of the plurality of DUTs, is specifically configured to: determine, in the first time parameters, a plurality of critical TISs, one corresponding to the TCK signal of each frequency, where, when the first DUT receives the test signal with the TCK signal of one frequency, the sub-information corresponding to a TIS greater than the critical TIS is in the first state and the sub-information corresponding to a TIS less than the critical TIS is in the second state; and obtain the first offset parameters based on the differences between the plurality of critical TISs corresponding to the TCK signal of each frequency in the target time parameters and the plurality of critical TISs of the TCK signal of each frequency in the first time parameters. In some embodiments, the acquisition module 301 is further configured to acquire the target time parameters of a target DUT on the test platform. In some embodiments, the acquisition module 301 is further configured to send, via the signal source, the test signal and the TCK signals to standard DUTs when the standard DUTs are located at the plurality of test locations on the test platform, and to control the standard DUT at each test location to receive the test signal with the TCK signal at a preset TIS; and the processing module 302 is further configured to determine, from the plurality of test locations, a test location at which the standard DUT cannot accurately receive the test signal as a location to be offset, and to conduct a time offset for the TCK signal at the location to be offset. It should be noted that the modules of the foregoing device are divided merely in terms of logical functions; in actual implementation, they may be integrated into one physical entity in whole or in part, or may be physically separated. These modules can be implemented in whole or in part in the form of software called through processing elements, or in whole or in part in the form of hardware. A module can be implemented by an independent processing element, or by a chip integrated in the foregoing device. In addition, a module can also be stored in a memory of the foregoing device in the form of program code, and one of the processing elements of the foregoing device calls and executes the functions of that module. The other modules are implemented in a similar way. In addition, these modules can be integrated in whole or in part, or implemented independently. The processing element described herein may be an integrated circuit capable of processing signals.
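As one illustration of the software form just described, the logical split of FIG. 14 admits a skeleton along the following lines; the method bodies reuse the sketches above, and the signal-source API is hypothetical:

```python
# Skeleton mirroring the acquisition module 301, processing module 302,
# and offset module 303 of FIG. 14. Illustrative only.

class OffsetDevice:
    def __init__(self, signal_source, harness):
        self.signal_source = signal_source   # hypothetical signal source handle
        self.harness = harness               # hypothetical test-platform handle

    def acquire(self, tck_periods, tis_values):        # acquisition module 301
        return sweep_time_parameters(tck_periods, tis_values,
                                     self.harness.send_signals,
                                     self.harness.dut_received_ok)

    def process(self, target_matrix, local_matrix):    # processing module 302
        return offset_parameters(critical_tis_per_tck(target_matrix),
                                 critical_tis_per_tck(local_matrix))

    def send_offsets(self, offsets):                   # offset module 303
        self.signal_source.apply_time_offsets(offsets) # hypothetical API
```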
During implementation, each step of the foregoing method, or each of the foregoing modules, may be performed through an integrated logic circuit as hardware in a processor element or through instructions as software. For example, the foregoing modules may be one or more integrated circuits configured to implement the above method, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code called by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU), or another processor that can call the program code. For yet another example, these modules can be integrated and implemented in the form of a system-on-chip (SoC). The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, the implementation can be performed in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are achieved in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or may be sent from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be sent from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), a semiconductor medium (such as a solid state disk (SSD)), or the like. An embodiment of the present disclosure provides a time offset device for a test signal. Referring to FIG. 15, the time offset device 400 for a test signal may be provided as a terminal device. The time offset device 400 for a test signal may include a processor 401, and one or more processors may be provided as required. The time offset device 400 for a test signal may further include a memory 402 configured to store executable instructions, such as an application program, for the processor 401. One or more memories may be provided as required, and the memory may store one or more application programs. The processor 401 is configured to execute the instructions to perform the foregoing method. In an embodiment, a non-transitory computer-readable storage medium including instructions is provided. Referring to FIG. 15, for example, the non-transitory computer-readable storage medium may be the memory 402 including instructions. The foregoing instructions may be executed by the processor 401 of the time offset device 400 to complete the foregoing method.
For example, the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. Embodiments of the present application also provide a chip for running instructions, the chip being configured to execute the time offset method for a test signal implemented by the offset device according to any one of the foregoing embodiments of the present application. Embodiments of the present application also provide a program product including a computer program stored in a storage medium, from which at least one processor can read the computer program. The at least one processor, when executing the computer program, can achieve the time offset method for a test signal executed by an electronic device according to any one of the foregoing embodiments of the present application. Those of ordinary skill in the art can understand that all or some of the steps in the foregoing method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program runs, the steps of the method embodiments are performed. The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc. Finally, it should be noted that the above embodiments are merely used to explain the technical solutions of the present application and are not intended to limit it. Although the present application is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some or all of the technical features therein, and that such modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the embodiments of the present application.
DETAILED DESCRIPTION Aspects of the present disclosure relate to machine learning delay estimation for emulation systems. The compilation workflow to compile a DUT for emulation can be split into phases including partitioning, placement and routing, and compiling. One or more of these phases can be timing driven. For example, the placement and routing of partitioned FPGAs can be determined based on the timing (e.g., delays) of signals from one register to another. The timing of the signals can be measured after the FPGAs are compiled, and the final emulation frequency performance of the system can be determined once the delays associated with signals communicated in the compiled DUT are available. However, this creates a cross-dependency where delays are needed before they are available. To resolve this cross-dependency, the delays can be estimated. A delay estimation system described herein uses machine learning to predict combinatorial path delay and provide timing guidance during the compilation workflow phases. The delay estimation system receives logic blocks of the DUT and a combinatorial path connecting one or more of the logic blocks. For example, after the partitioning phase performed by a compiler, the delay estimation system may receive a combinatorial path connecting two or more logic blocks as partitioned by the compiler across one or more FPGAs. The system applies a delay model to a feature vector representing the combinatorial path, where the delay model can determine a delay of the combinatorial path. The features of the feature vector may be orthogonal to one another; that is, the value of one feature does not depend on the value of another feature. The delay model may be a machine learning model. The system generates a timing graph using the determined delay and provides the timing graph to a compiler to perform compilation workflow phases (e.g., placement and routing of the DUT). This machine learning approach allows for increased accuracy in estimating the delay of a combinatorial path within a DUT, increased speed at which a DUT is emulated because both compiler partitioning and P&R improve as delay accuracy increases (i.e., decreasing the processing cycles needed by an emulator when emulating the compiled DUT), and reduced consumption of processing resources when estimating the delay of a combinatorial path within the DUT. FIG. 1 illustrates a block diagram of a process 100 for compiling a DUT, according to one embodiment. The process 100 may include both frontend and backend compilation workflows for compiling a DUT. At least a portion of the compilation workflows may be performed by a delay estimation system described herein or a host system (e.g., a compiler of a host system as shown in FIG. 9). The delay estimation system may be a component of the host system. The delay estimation system is further described in the description of FIG. 3. The backend compilation workflow may be split into three phases. In a first backend phase, a user design is split into multiple subsets of netlists, where each netlist can be mapped and fit into the size of the target FPGA. As the user design, which is also referred to herein as a “design under test” or “DUT,” is split across subsets of netlists, the DUT is partitioned across various FPGAs. The first backend phase may be a “partitioning” phase. The first backend phase may be timing-driven (e.g., estimated delays of combinatorial paths of the DUT are used to determine how the DUT is partitioned across FPGAs).
A user design may be at least a portion of a DUT. The first backend phase may be performed by a timing-driven partitioning system of a compiler. The timing-driven partitioning system may receive user and timing constraints, hardware and firmware configurations, and the result of the frontend processing phase, which is generated using register-transfer level (RTL) files of a user design (e.g., netlists of a DUT). The timing-driven partitioning system may receive delay estimates within a timing graph to determine how the DUT is partitioned across FPGAs. The partitioned DUT is used in a second backend phase. In the second backend phase, each subset of netlists is placed at a specific physical FPGA location and connections are routed among the FPGAs. The second backend phase may be a “place and route” (P&R) phase. The second backend phase may be timing-driven (e.g., estimated delays of combinatorial paths of the DUT are used in the P&R among the FPGAs). The second backend phase may follow the first backend phase and precede a third backend phase. The second backend phase may be performed by a timing-driven system P&R system of a compiler. The timing-driven system P&R system may receive the partitioned DUT from the first backend phase and delay estimates within a timing graph to determine how the FPGAs are placed and routed amongst each other. In the third backend phase, the partitioned subsets of netlists are sent to a compiler, which compiles the FPGAs (e.g., performing P&R within each of the FPGAs). Additionally, socket logic introduced by the timing-driven system P&R system may be provided to the compiler. The third backend phase may be an “FPGA compile” phase. The third backend phase may also be timing-driven (e.g., estimated delays of combinatorial paths of the DUT are used in the FPGA-level P&R). In some embodiments, after the FPGA P&R in the third backend phase is completed, a global timing analysis of the compiled FPGAs may be performed and the measured delays of combinatorial paths within the FPGAs may be transmitted to a global database. In some embodiments of the three-phase backend workflow, the timing graph is generated using the measured delays obtained after the FPGA P&R in the third backend phase is completed. This, however, may create a cross-dependency in which the first and second phases cannot use delays in their timing-driven operations because the delays are unavailable until the end of the third phase. In some embodiments, to solve the cross-dependency, a fixed delay estimate (e.g., a conservative, fixed delay) or a logic-level-count-based predictor can be used. These solutions, however, may estimate the true delay with low accuracy. In turn, this may mislead backend systems of a compiler into optimizing the wrong combinatorial paths of the DUT. For improved accuracy, a delay model may be used to estimate delays using data that is available at the first and/or second backend phases. The delay model may be a machine-learning model. The delay model can estimate combinatorial path delay with increased accuracy and improve timing guidance for backend systems of a compiler because the delay model accounts for data specific to the DUT whose delays are estimated. In this way, a delay estimation system implementing the delay model is not limited by the cross-dependency described above and can perform timing-driven partitioning and P&R before the third backend phase is performed.
During partitioning in the first backend phase, as the global netlist of the DUT is split into multiple FPGA-sized clusters, the global timing graph is also spread across different sub-partitions. After partitioning in the first backend phase, the timing nodes that form the global timing graph can be split into different FPGAs. Each timing node may represent a timing path, or timing arc, corresponding to a combinatorial path of the DUT. A timing path may be divided across multiple FPGAs and, accordingly, may be divided into multiple timing paths. Examples of paths that are divided across multiple FPGAs are depicted in FIG. 2. The delay of a timing path, both across FPGAs and within an FPGA, can range from a few nanoseconds to hundreds of nanoseconds. The delay may depend on factors such as FPGA size, netlist hierarchy, and FPGA fill rates. An accurate internal FPGA delay is valuable to timing-driven partitioning and/or P&R systems for better optimizing the performance and size of a DUT through emulation. By providing a more accurate delay estimate at the early backend phases, the delay estimation system allows a compiler to focus on optimizing the true critical paths of a DUT rather than incorrectly flagged critical paths whose delays are not as large as the true critical paths' delays. Thus, the delay estimation system may improve DUT emulation (e.g., optimized critical paths cause the speed of emulation to increase) without manual tuning or additional iterations to adjust internal FPGA delays. Furthermore, reducing how often emulation must be re-run because the initial results were low in accuracy also reduces the processing resources consumed by an emulation system. A higher emulation frequency, or emulation clock frequency, enables a faster turnaround in the testing process of user designs, allows more coverage, and lowers cost. For example, coverage can increase because a higher emulation frequency enables more test benches to be run within a given emulation time. Furthermore, some design defects may appear only after a long emulation time. With a higher emulation frequency, the cost of the time spent finding a design defect can decrease because the higher emulation frequency can reach the clock cycle exhibiting the defect faster than a slower emulation frequency can. Yet another way cost is decreased is that an emulation system can be shared by multiple emulation jobs according to a particular job schedule, and if a job finishes faster, additional jobs can be scheduled. A smaller emulation system can be used to process multiple designs in a scheduling queue; thus, the cost of processing is decreased by using the smaller emulation system shared by multiple emulation jobs. FIG. 2 depicts a DUT 200 partitioned across FPGAs, according to one embodiment. The DUT 200 is partitioned across FPGAs A-C and includes registers R1-R4, logic blocks 211, 212, 213, 221, and 222, and combinatorial paths 210 and 220. The combinatorial path 210 begins at the “Q” output of register R1, which is referred to herein using the notation “R1.Q,” and ends at the “D” input of register R4, or R4.D. The combinatorial path 210 includes logic blocks 211, 212, and 213. A logic block may include FPGA primitives (e.g., a 4-input LUT (“LUT4”), digital signal processors (DSPs), etc.) and wires, both of which can contribute to the delay of the combinatorial path on which the logic block is connected. The combinatorial path 210 spans across FPGA A and FPGA B at ports pA3 and pB3. The combinatorial path 210 spans across FPGA B and FPGA C at ports pB4 and pC4.
The combinatorial path 220 begins at R2.Q and ends at R3.D. The combinatorial path 220 includes logic blocks 221 and 222. The combinatorial path 220 spans across FPGA A and FPGA B at ports pA1 and pB1. The DUT 200 is partitioned into FPGAs A-C, and thus a global timing graph of the DUT is also split across multiple FPGAs. Combinatorial paths and the corresponding timing paths can be fully contained within an FPGA. For example, the combinatorial path from R1.Q to R2.D is fully contained within FPGA A. Combinatorial paths can also be split across multiple FPGAs. For example, the combinatorial path from R2.Q to R3.D is split across FPGAs A and B. In both cases, a delay estimation system can traverse a combinatorial path and obtain the logic blocks on the combinatorial path that correlate to certain timing nodes of the global timing graph. The delay estimation system can extract the logic blocks on a combinatorial path and data used to describe the delay on the combinatorial path. Such data can include a number of logic levels on the combinatorial path, a total hierarchical distance of wires on the combinatorial path, a sum of fanouts of the wires on the combinatorial path, a timing path type of the combinatorial path, a register primitive fill rate of one or more field programmable gate arrays (FPGAs) through which the combinatorial path spans, a look-up-table (LUT) primitive fill rate of the FPGAs, any suitable feature relevant to the delay of a primitive or wiring of a logic block, or a combination thereof. The delay estimation system can use the combinatorial path, the extracted logic blocks on the combinatorial path, and the extracted data to estimate the timing path delays for a global timing analysis. For example, the combinatorial path 220 from R2.Q to R3.D is split between FPGA A and FPGA B, and the delay estimation system can estimate the delay from R2.Q to pA1 and the delay from pB1 to R3.D separately (e.g., using a delay model). The estimated delay may then be annotated to the global timing graph. FIG. 3 shows a block diagram 300 of a delay estimation system, according to one embodiment. The block diagram 300 includes a delay estimation system 310, a host system 320, an emulation system 330, and a network 340. The delay estimation system 310 may be a remote computing device or server that is communicatively coupled to the host system 320 through the network 340. The host system 320 may be a computing device that includes a compiler 321 for compiling a DUT using a netlist from the DUT netlists database 311. The host system 320 may be communicatively coupled to the emulation system 330 through a local network connection (e.g., as described in the description of FIG. 9). The delay estimation system 310 can include databases such as a DUT netlists database 311 and an empirical delay database 312. Alternatively or additionally, the databases can be located remote from the delay estimation system 310 (e.g., in a different server that is communicatively coupled to the delay estimation system 310 and the host system 320 through the network 340). The delay estimation system includes software modules such as a feature vector generation engine 313, a model training engine 314, a delay model 315, and a timing graph generation engine 316. The block diagram 300 may have additional, different, or fewer components than shown in FIG. 3.
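To make the extracted per-path data concrete, the sketch below models it as a simple record; the field names are illustrative, not the patent's identifiers:

```python
# Minimal sketch of the six per-path features described above.
from dataclasses import dataclass

@dataclass
class PathFeatures:
    num_logic_levels: int    # each wire or primitive on the path is one level
    hier_dist_total: float   # sum of per-wire hierarchical distances
    total_fanout: int        # sum of fanouts over all wires on the path
    timing_path_type: int    # e.g., 0 = data path, 1 = clock path (illustrative encoding)
    reg_fill_rate: float     # register primitive fill rate of the spanned FPGA(s)
    lut_fill_rate: float     # LUT primitive fill rate of the spanned FPGA(s)

    def to_vector(self):
        return [self.num_logic_levels, self.hier_dist_total,
                self.total_fanout, self.timing_path_type,
                self.reg_fill_rate, self.lut_fill_rate]
```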
It is noted that a software module may comprise executable program code that may be stored in a non-transitory computer-readable storage medium (e.g., a storage device such as a disk or memory) and be executable by one or more processing units (e.g., a processor, a controller, or a state machine). The program code may be packaged with the processing unit to provide a special-purpose device corresponding to the function described. Further, it is noted that an engine also may be comprised of executable program code that may be stored in a non-transitory computer-readable storage medium (e.g., a storage device such as a disk or memory) and be executable by one or more processing units (e.g., a processor, a controller, or a state machine). The program code may be packaged with the processing unit to provide a special-purpose device corresponding to the function described. The DUT netlists database 311 stores netlists of DUTs for compilation by the compiler 321 and emulation by the emulation system 330. The delay estimation system 310 may access the netlists in the database 311 for determining a feature vector via the feature vector generation engine 313, for determining training data to train the delay model 315 by the model training engine 314, for inputting into the delay model 315 to estimate the delay of a combinatorial path representing a portion of a netlist, or for annotating a global timing graph of the netlist via the timing graph generation engine 316. A DUT can be mapped into FPGA primitives during the frontend processing phase (e.g., as shown in FIG. 1). The DUT netlists database 311 may also store data describing the mapped primitives and wires to be provided as input for the backend phases or for delay estimation by the system 310. The empirical delay database 312 stores the delays measured after compiling the FPGA(s) into which the DUT is partitioned. These measured delays can be used by the model training engine 314 to train and validate the delay model 315 (e.g., using the primitives and traversed logic blocks along a timing path). Although not depicted, the delay estimation system 310 may include a database for storing the estimated delays output by the delay model 315. The stored delays may be in a data structure representing a global timing graph, including a netlist or logic blocks thereof annotated with the estimated delays. The delay estimation system 310 may provide the stored estimated delays to the host system 320 for optimizing the partitioning and/or P&R of the DUT during compilation. The feature vector generation engine 313 generates a feature vector representing data related to a combinatorial path, where the feature vector is input to the delay model 315 for estimating the delay of the combinatorial path. The feature vector generation engine 313 may also generate feature vectors for use as training data by the model training engine 314. The feature vector generation engine 313 may generate vectors representing total primitive delays and total wire routing delays, the two components that contribute to the total delay of a combinatorial path. A feature vector may include one or more dimensions, or features, where each dimension is a value representing a characteristic of the combinatorial path relevant to determining its delay. The characteristics can include the number of logic levels on the combinatorial path, the hierarchical distance on the path, the total fanout, the timing path type, the register primitive fill rate of the FPGA, and the LUT primitive fill rate of the FPGA.
The characteristics may be chosen such that the dimensions of the feature vectors are orthogonal (e.g., the values of the dimensions are independent of each other). In one example of a three-dimensional feature vector, the feature vector generation engine 313 generates a feature vector of three values representing the total fanout of wires on a combinatorial path, a register primitive fill rate of one or more of the FPGAs through which the combinatorial path spans, and a number of logic levels on the combinatorial path. The features included in the feature vector generated by the feature vector generation engine 313 may be obtained after a compiler completes the partitioning phase of the DUT (e.g., backend phase 1). The different features that may be included within feature vectors are described in more detail below. The feature vector generation engine 313 can compute primitive delays based on a sum of the delays of each primitive in a combinatorial path. The delay of each primitive can be stable or constant. For example, for a primitive such as a global clock buffer (BUFG), a DSP, or a random access memory (RAM), the feature vector generation engine 313 can determine a constant primitive delay given the input and output pin ID combination, which is known at the partitioning phase. In some embodiments, a primitive delay can be estimated (e.g., using an average delay). For example, for a primitive such as a LUT, although the pin ID is known at the partitioning phase, the pin IDs may be swapped during a subsequent phase of compilation. Accordingly, a statistical mean value can be used to estimate the primitive delay for the LUT. The feature vector generation engine 313 can also compute the wire routing delays in a combinatorial path. In some embodiments, the delay of each wire may vary from wire to wire. However, data that describes the combinatorial path and captures the factors impacting the total wire delay may be used to estimate the wire delays. As described previously, the data may include (1) a number of logic levels on the combinatorial path, (2) a total hierarchical distance of wires on the combinatorial path, (3) a sum of fanouts of the wires on the combinatorial path, (4) a timing path type of the combinatorial path, (5) a register primitive fill rate of one or more of the FPGAs through which the combinatorial path spans, (6) a LUT primitive fill rate of the one or more FPGAs, any other suitable data impacting the total wire delay, or a combination thereof. The data may be independent of one another; any two of the six features identified above may be orthogonal to one another (e.g., the total fanout of a combinatorial path does not depend on the timing path type). Because the feature vector can be composed of dimensions that are orthogonal to one another, the delay estimation system 310 increases the processing efficiency with which delay is estimated (e.g., by avoiding the use of processing resources on data that is redundant for determining the delay). The number of logic levels on the combinatorial path can represent the logic length of the combinatorial path, where each wire or primitive is one logic level. The total fanout of a combinatorial path can represent the fanout nature of the wires in the combinatorial path; the total fanout can be the sum of the fanouts on all the wires in the combinatorial path. The register primitive fill rate and the LUT primitive fill rate are FPGA usage features, which are indirect indicators of the impact of FPGA usage or congestion on routing delays.
A timing path type of a combinatorial path represents a difference between path types (e.g., indicating that the combinatorial path is of a clock path type rather than a data path type). The total hierarchical distance on a path represents the total hierarchical distance of the wires along the combinatorial path. The total hierarchical distance is related to a correlation between a wire's driver or load hierarchy and the physical distance in the FPGA placement toward a later stage of the compilation workflow. Specifically, for each wire with a driver-reader pair, the hierarchical distance can be defined as:

hier_dist_max = max_diff_hier / (max_diff_hier + common_hier)

where max_diff_hier is the maximum different-hierarchy number of the driver and load instances and common_hier is the common hierarchy of the driver and load instances. In one example of determining the total hierarchical distance on a path, one wire connects two instances: a driver instance of "top/a/b/c/d/e" and a reader instance of "top/a/b/c/f/g/h." The common hierarchy is "top/a/b/c" and the different hierarchies are "d/e" and "f/g/h." The common hierarchy of the two instances, common_hier, is 4. The maximum different-hierarchy number of the two instances is defined as the larger of the different hierarchies, which is max(2, 3) = 3 in this example (two levels for "d/e" versus three for "f/g/h"). The hier_dist_max is thus 3/(3+4) = 3/7. The total hierarchical distance is the sum of the hier_dist_max of each wire on the combinatorial path (a short code sketch of this computation follows this passage). Similarly, a minimum hierarchical distance is a value that can be included in the generated feature vector in addition or as an alternative to the maximum hierarchical distance. Depending on the design size, type, and partitioning results, the number of timing paths across each FPGA can be large (e.g., ranging from ten thousand to one hundred thousand paths). In an experiment conducted to develop a delay model, 42 designs of various sizes, spanning a total of 2.1k FPGAs, were analyzed, which resulted in about 9.3 million combinatorial paths, each having a corresponding timing path. In this experiment, the 9.3 million combinatorial paths were used to generate a training dataset for the delay model, and a random forest algorithm was used to develop the delay model. Weights were determined for each feature in the vector, as shown in Table 1 below.

TABLE 1
Example weights for delay model features
  Feature                    Feature weight
  (1) num_logic_levels       0.63
  (2) hier_dist_max_path     0.30
  (3) total_fanouts          0.03
  (4) timing_path_type       0.02
  (5) reg_fill_rate          0.01
  (6) lut_fill_rate          0.01

Although a random forest algorithm was used to develop the delay model in the experiment, the delay model 315 may use various machine learning techniques such as a linear support vector machine (linear SVM), boosting of other algorithms (e.g., AdaBoost), neural networks, logistic regression, naive Bayes, memory-based learning, bagged trees, decision trees, boosted trees, boosted stumps, a supervised or unsupervised learning algorithm, or any suitable combination thereof.
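The hierarchical-distance computation referenced above can be sketched directly from its definition; the instance strings are the document's own worked example:

```python
# Sketch of hier_dist_max for one wire, per the definition above.

def hier_dist_max(driver, reader):
    d, r = driver.split('/'), reader.split('/')
    common = 0
    for a, b in zip(d, r):
        if a != b:
            break
        common += 1                               # common_hier
    max_diff = max(len(d), len(r)) - common       # max_diff_hier
    return max_diff / (max_diff + common)

# Worked example from the text: common_hier = 4, max_diff_hier = 3.
assert abs(hier_dist_max("top/a/b/c/d/e", "top/a/b/c/f/g/h") - 3/7) < 1e-9

def total_hier_dist(wires):
    # wires: iterable of (driver_instance, reader_instance) pairs on the path
    return sum(hier_dist_max(drv, rdr) for drv, rdr in wires)
```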
The model training engine 314 may train the delay model 315 using feature vectors generated by the feature vector generation engine 313 and may validate the delay model 315. To train the delay model 315, the model training engine 314 may generate a first training data set using combinatorial paths of compiled DUTs and measured delays of those combinatorial paths. The training data may include feature vectors generated using information about the combinatorial path (e.g., including the six features described with respect to the feature vector generation engine 313). The feature vectors may be labeled with the measured delay of the corresponding combinatorial path represented by each feature vector. The model training engine 314 may train the delay model 315 using the first training data set. The model training engine 314 may retrain the delay model 315 using a second training data set. For example, the delay estimation system 310 may generate a timing graph using the delay model 315 trained on the first training data set, compile a DUT using the timing graph, and subsequently receive a measured delay of a combinatorial path of the compiled DUT. The model training engine 314 may then create the second training data set using the combinatorial path and the subsequently received measured delay. In one example of retraining the model 315, the model training engine 314 adjusts the weights corresponding to the dimensions of the feature vectors (e.g., the weights shown in Table 1). The model training engine 314 may generate the second training data set using the adjusted weights and a feature vector of the six features of the combinatorial path, where the feature vector is labeled with the subsequently received measured delay. In one example of validating the delay model 315, the combinatorial paths on half of the compiled FPGAs may be used as a training set and the remaining combinatorial paths may be used for validation. A random forest algorithm may be used to determine an R² score and a root mean square error (RMSE) to validate the delay model 315. For example, an R² score of 91% and an RMSE of 10416 nanoseconds were determined for the delay model whose experimental results are depicted in FIG. 7. The delay model 315 outputs a delay caused by a particular configuration of a DUT determined during compilation (e.g., a particular FPGA partition or a particular placement and routing of the FPGAs). The delay model 315 may output delays for a combinatorial path of the DUT, for a logic block on a combinatorial path, or for a combination thereof. The delay model 315 may output an estimate of a wire delay or an estimate of a total combinatorial path delay. In one example of outputting an estimated wire delay, the delay model 315 can receive, as input, a feature vector representing a combinatorial path, where the feature vector includes the six features described in the description of the feature vector generation engine 313. The delay model 315 may then output an estimated wire delay, as the six features represent the wire delay of the combinatorial path. The estimated wire delay may then be combined with a primitive delay of the combinatorial path to determine a total combinatorial path delay (e.g., for inclusion in a timing graph). In one example of outputting an estimated total combinatorial path delay, the delay model 315 may receive, as input, a feature vector including the six features and a primitive delay of the combinatorial path. Using this example feature vector of seven dimensions, the delay model 315 may output an estimate of the total combinatorial path delay of the combinatorial path.
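A sketch of how such a model could be trained and validated, using scikit-learn's random forest purely for illustration (the document does not prescribe a specific library), with feature vectors labeled by empirically measured delays:

```python
# Illustrative training/validation flow for the delay model. Assumes
# feature vectors such as PathFeatures.to_vector() above and measured
# delays from the empirical delay database as labels.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

def train_delay_model(train_vectors, train_delays):
    model = RandomForestRegressor(n_estimators=100)
    model.fit(train_vectors, train_delays)        # labeled first training set
    return model

def validate_delay_model(model, held_out_vectors, held_out_delays):
    # e.g., train on paths from half of the compiled FPGAs, validate on the rest
    pred = model.predict(held_out_vectors)
    rmse = mean_squared_error(held_out_delays, pred) ** 0.5
    return r2_score(held_out_delays, pred), rmse
```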
The timing graph generation engine 316 may generate a timing graph for a DUT. A timing graph may include timing nodes that correspond to components contributing to the delay of a combinatorial path. For example, the timing graph generation engine 316 may receive estimated delays of logic blocks output by the delay model 315 and annotate the corresponding timing nodes in the timing graph. In another example, the timing graph generation engine 316 may receive estimated delays of combinatorial paths and annotate the timing paths corresponding to one or more timing nodes in a timing graph. The timing graph generation engine 316 may receive logic blocks of a DUT and a combinatorial path connecting one or more of the logic blocks (e.g., from the netlist database 311 or from a compiler). The timing graph generation engine 316 applies the delay model 315 to a feature vector representing the combinatorial path (e.g., a feature vector generated by the feature vector generation engine 313). The timing graph generation engine 316 can then generate a timing graph based on the delay of the combinatorial path as determined by the delay model 315. In some embodiments, the delay estimation system 310 determines the true critical paths of a DUT. A critical path may be a combinatorial path that has a greater delay than one or more other combinatorial paths of the DUT. Delay that is determined without applying the delay model 315 may be inaccurate and cause critical paths to be determined incorrectly, leaving the true critical paths unoptimized because their delay was not flagged to the compiler as needing resources to minimize (e.g., P&R to determine a time division multiplexing (TDM) ratio that would allocate more wires to decrease the delay on the true critical path). Using FIG. 2 as an example of determining critical paths, the true critical path may be the combinatorial path 220, while the combinatorial path 210 may have been incorrectly determined to be a critical path. This may happen if the delay is determined solely based on the number of FPGAs that a combinatorial path traverses. Because the combinatorial path 210 traverses FPGAs A-C while the combinatorial path 220 traverses FPGAs A and B, the combinatorial path 210 may be determined to have more delay than the combinatorial path 220. However, taking into account attributes of the combinatorial paths (e.g., the primitive and wiring delays within logic blocks 211-213, 221, and 222), the delay estimation system 310 may determine that the delay of the combinatorial path 220 is greater than the delay of the combinatorial path 210. Hence, the true critical path is the combinatorial path 220. The compiler may then use the delays determined by the delay estimation system 310 to allocate TDM ratios accordingly (e.g., a greater TDM ratio to the combinatorial path 220 than is allocated to the combinatorial path 210).
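Ranking paths by model-estimated delay, rather than by FPGA-hop count, is what surfaces the true critical paths. A sketch; the path pairing and the top-k cutoff are illustrative choices, not part of the disclosure:

```python
# Sketch: flag the k paths with the largest estimated delays as critical
# so the compiler can, e.g., allocate them more favorable TDM ratios.

def true_critical_paths(paths, model, top_k=10):
    # paths: iterable of (path_id, PathFeatures) pairs
    scored = [(pid, model.predict([feats.to_vector()])[0])
              for pid, feats in paths]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]        # largest estimated delay first
```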
The network 340 may serve to communicatively couple the delay estimation system 310 and the host system 320. In some embodiments, the network 340 includes any combination of local area and/or wide area networks, using wired and/or wireless communication systems. The network 340 may use standard communications technologies and/or protocols. For example, the network 340 includes communication links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line (DSL), etc. Examples of networking protocols used for communicating via the network 340 include multiprotocol label switching (MPLS), transmission control protocol/Internet protocol (TCP/IP), hypertext transport protocol (HTTP), simple mail transfer protocol (SMTP), and file transfer protocol (FTP). Data exchanged over the network may be represented using any suitable format, such as hypertext markup language (HTML) or extensible markup language (XML). In some embodiments, all or some of the communication links of the network 340 may be encrypted using any suitable technique or techniques. FIG. 4 illustrates a block diagram of a process 400 for compiling a DUT using delay estimates, according to one embodiment. In some embodiments, the timing graph generation engine 316 may generate a timing graph without a delay model. For example, a fixed delay estimate (e.g., a conservative, fixed delay) or a logic-level-count-based predictor can be used to determine a timing graph for a DUT to be compiled. The estimated delays can be applied to partition a DUT across FPGAs and to perform P&R among the FPGAs before performing P&R within the FPGAs. FIG. 5 illustrates a block diagram of a process 500 for training a delay model of a delay estimation system, according to one embodiment. The process 500 is similar to the process 100 depicted in FIG. 1; the process 500 differs in the addition of empirical delays 510 being provided to the delay estimation system 310. The delay estimation system 310 may receive the empirical delays 510 of combinatorial paths measured after the DUT is compiled across one or more FPGAs. The empirical delays 510 are stored in the empirical delay database 312 of the delay estimation system 310 for use in training the delay model 315. The training of the delay model 315 is described in the description of the model training engine 314 of FIG. 3. FIG. 6 illustrates a block diagram of a process 600 for compiling a DUT using delay estimates determined by the delay estimation system 310, according to one embodiment. The process 600 may occur following the process 500, in which the delay model 315 is trained using the empirical delays 510. In the process 600, the delay model 315 of the delay estimation system 310 is applied to data related to the combinatorial paths whose delay is estimated. The delay estimation system 310 may receive data following the partitioning phase of the backend compilation workflow, where the data includes information related to the logic blocks of a DUT that is partitioned into FPGAs, the FPGA fill rate(s), primitives, netlist hierarchy, any suitable information related to the impact of wire delays (e.g., timing path types, number of logic levels of a combinatorial path, hierarchical distance of a combinatorial path, register fill rate, or LUT fill rate), or a combination thereof. The received data may be used by the feature vector generation engine 313 to generate a feature vector for input to the delay model 315. The delay model 315 outputs an estimated delay for a combinatorial path of the partitioned DUT, where the combinatorial path corresponds to a timing path and thus the output delay is also the delay for the timing path. The estimated delays may be used to generate a timing graph with the delays annotating the timing nodes or paths of the timing graph. The estimated delays of the timing graph can be used to re-partition the DUT among the FPGAs to reduce the delays of each of the newly partitioned FPGAs. The estimated delays can also be used to perform P&R among the FPGAs to reduce delays caused by connections between FPGAs. FIG. 7 shows experimental results 700 comparing examples of measured delays of paths of a compiled DUT against estimated delays of the same paths produced by a delay estimation system.
Application of the delay model to the backend phases of the compilation workflow may improve the performance of the compiled DUT by reducing delays by, for example, 5-20%, owing to the accuracy of the combinatorial path delays output by the delay model for use in optimizing the partitioning and P&R of the DUT during compilation. Another result of the experiment showed that the DUT emulation frequency reported at the second backend phase was closer to the performance reported after the third backend phase. In particular, the accuracy of the delay model described herein is shown by the experimental results 700. The results 700 show that the range of the estimated delay increases as the actual delay increases. Nevertheless, the estimated delays track the actual delay with an R² score of 91% and an RMSE of 10416 ns. FIG. 8 depicts a flowchart of a process 800 for determining a timing graph for P&R using delay estimates determined by a delay estimation system, e.g., 310, according to one embodiment. The process 800 may be performed by the delay estimation system 310. The process 800 may include additional, fewer, or different operations. The delay estimation system 310 receives 802 logic blocks of a DUT and a combinatorial path connecting one or more of the logic blocks. For example, the timing graph generation engine 316 receives 802 a netlist including the logic blocks 221 and 222 of the combinatorial path 220 and receives the combinatorial path 220 (e.g., information about the combinatorial path such as its logic length, the fanout nature of the wires in the combinatorial path, etc.). The delay estimation system 310 applies 804 a delay model to a feature vector representing the combinatorial path. For example, the timing graph generation engine 316 applies 804 the delay model 315 to a feature vector generated by the feature vector generation engine 313. The delay estimation system 310 generates 806 a timing graph based on the delay of the combinatorial path. For example, the timing graph generation engine 316 receives the estimated delay output by the delay model 315, where the estimated delay corresponds to an estimated wire delay of the wires within the logic block 221. The timing graph generation engine 316 determines a primitive delay based on the primitives included in the logic block 221 and determines the sum of the primitive delay and the estimated wire delay for the logic block 221. The timing graph generation engine 316 may similarly determine the sum of the primitive and wire delays for the logic block 222. The timing graph generation engine 316 may combine the primitive and wire delays for both logic blocks 221 and 222 with a delay corresponding to the portion of the combinatorial path between pA1 and pB1 to determine the total combinatorial path delay of the combinatorial path 220. The timing graph generation engine 316 generates 806 a timing graph that can include the primitive delays of logic blocks, the wiring delays of logic blocks, the delays of connections between FPGAs (e.g., between pA1 and pB1), the total combinatorial path delays, or any combination thereof. The delay estimation system 310 provides 808 the timing graph to a compiler to perform the placement and routing of the DUT. For example, the timing graph generation engine 316 provides the timing graph including the delays for the combinatorial paths 210 and 220 to the compiler 321 to perform P&R of the FPGAs into which the DUT is partitioned. In one example of P&R, the FPGAs A-C may be placed and routed in a configuration different from the configuration shown in FIG. 2.
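The per-path aggregation described in step 806 can be summarized as follows; the block objects and the inter-FPGA delay input are illustrative placeholders:

```python
# Sketch of step 806: total path delay = sum over logic blocks of
# (primitive delay + model-estimated wire delay), plus the delay of any
# inter-FPGA connection on the path (e.g., between ports pA1 and pB1).

def total_path_delay(logic_blocks, model, inter_fpga_delay=0.0):
    total = inter_fpga_delay
    for block in logic_blocks:                    # e.g., logic blocks 221 and 222
        total += block.primitive_delay            # constant or mean primitive delays
        total += model.predict([block.features.to_vector()])[0]  # wire delay
    return total                                  # annotated onto the timing graph
```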
FIG.9depicts a diagram of an example emulation environment900. An emulation environment900may be configured to verify the functionality of the circuit design. The emulation environment900may include a host system907(e.g., a computer that is part of an electronic design automation (EDA) system) and an emulation system902(e.g., a set of programmable devices such as Field Programmable Gate Arrays (FPGAs) or processors). The host system generates data and information by using a compiler910to structure the emulation system to emulate a circuit design. A circuit design to be emulated is also referred to as a Design Under Test (DUT) where data and information from the emulation are used to verify the functionality of the DUT. The host system907may include one or more processors. In the embodiment where the host system includes multiple processors, the functions described herein as being performed by the host system can be distributed among the multiple processors. The host system907may include a compiler910to transform specifications written in a description language that represents a DUT and to produce data (e.g., binary data) and information that is used to structure the emulation system902to emulate the DUT. The compiler910can transform, change, restructure, add new functions to, and/or control the timing of the DUT. The host system907and emulation system902exchange data and information using signals carried by an emulation connection. The connection can be, but is not limited to, one or more electrical cables such as cables with pin structures compatible with the Recommended Standard 232 (RS232) or universal serial bus (USB) protocols. The connection can be a wired communication medium or network such as a local area network or a wide area network such as the Internet. The connection can be a wireless communication medium or a network with one or more points of access using a wireless protocol such as BLUETOOTH or IEEE 802.11. The host system907and emulation system902can exchange data and information through a third device such as a network server. The emulation system902includes multiple FPGAs (or other modules) such as FPGAs9041and9042as well as additional FPGAs to904N. Each FPGA can include one or more FPGA interfaces through which the FPGA is connected to other FPGAs (and potentially other emulation components) for the FPGAs to exchange signals. An FPGA interface can be referred to as an input/output pin or an FPGA pad. While an emulator may include FPGAs, embodiments of emulators can include other types of logic blocks instead of, or along with, the FPGAs for emulating DUTs. For example, the emulation system902can include custom FPGAs, specialized ASICs for emulation or prototyping, memories, and input/output devices. A programmable device can include an array of programmable logic blocks and a hierarchy of interconnections that can enable the programmable logic blocks to be interconnected according to the descriptions in the HDL code. Each of the programmable logic blocks can enable complex combinational functions or enable logic gates such as AND and XOR logic blocks. In some embodiments, the logic blocks also can include memory elements/devices, which can be simple latches, flip-flops, or other blocks of memory. Depending on the length of the interconnections between different logic blocks, signals can arrive at input terminals of the logic blocks at different times and thus may be temporarily stored in the memory elements/devices. FPGAs9041-904Nmay be placed onto one or more boards9121and9122as well as additional boards through912M.
Multiple boards can be placed into an emulation unit9141. The boards within an emulation unit can be connected using the backplane of the emulation unit or any other types of connections. In addition, multiple emulation units (e.g.,9141and9142through914K) can be connected to each other by cables or any other means to form a multi-emulation unit system. For a DUT that is to be emulated, the host system907transmits one or more bit files to the emulation system902. The bit files may specify a description of the DUT and may further specify partitions of the DUT created by the host system907with trace and injection logic, mappings of the partitions to the FPGAs of the emulator, and design constraints. Using the bit files, the emulator structures the FPGAs to perform the functions of the DUT. In some embodiments, one or more FPGAs of the emulators may have the trace and injection logic built into the silicon of the FPGA. In such an embodiment, the FPGAs may not be structured by the host system to emulate trace and injection logic. The host system907receives a description of a DUT that is to be emulated. In some embodiments, the DUT description is in a description language (e.g., a register transfer language (RTL)). In some embodiments, the DUT description is in netlist level files or a mix of netlist level files and HDL files. If part of the DUT description or the entire DUT description is in an HDL, then the host system can synthesize the DUT description to create a gate level netlist using the DUT description. A host system can use the netlist of the DUT to partition the DUT into multiple partitions where one or more of the partitions include trace and injection logic. The trace and injection logic traces interface signals that are exchanged via the interfaces of an FPGA. Additionally, the trace and injection logic can inject traced interface signals into the logic of the FPGA. The host system maps each partition to an FPGA of the emulator. In some embodiments, the trace and injection logic is included in select partitions for a group of FPGAs. The trace and injection logic can be built into one or more of the FPGAs of an emulator. The host system can synthesize multiplexers to be mapped into the FPGAs. The multiplexers can be used by the trace and injection logic to inject interface signals into the DUT logic. The host system creates bit files describing each partition of the DUT and the mapping of the partitions to the FPGAs. For partitions in which trace and injection logic are included, the bit files also describe the logic that is included. The bit files can include place and route information and design constraints. The host system stores the bit files and information describing which FPGAs are to emulate each component of the DUT (e.g., to which FPGAs each component is mapped). Upon request, the host system transmits the bit files to the emulator. The host system signals the emulator to start the emulation of the DUT. During emulation of the DUT or at the end of the emulation, the host system receives emulation results from the emulator through the emulation connection. Emulation results are data and information generated by the emulator during the emulation of the DUT, which include interface signals and states of interface signals that have been traced by the trace and injection logic of each FPGA. The host system can store the emulation results and/or transmit the emulation results to another processing system.
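As a rough illustration of the information a bit file and the host system's mapping records carry, consider the following hypothetical data structures; the field names are assumptions for illustration, not the actual bit file format.

```python
# A minimal, hypothetical record of what the host system stores per partition,
# following the description above (field names are assumptions).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PartitionBitFile:
    partition_id: str
    fpga_id: str                  # FPGA of the emulator the partition maps to
    has_trace_injection: bool     # whether trace and injection logic is included
    design_constraints: List[str] = field(default_factory=list)
    place_route_info: Dict[str, str] = field(default_factory=dict)


# Mapping the host system keeps so it can later identify which FPGAs
# emulate a given DUT component (contents are hypothetical).
component_to_fpgas: Dict[str, List[str]] = {
    "alu0": ["FPGA_A"],
    "fifo1": ["FPGA_A", "FPGA_B"],  # a component may span partitions
}
```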
After emulation of the DUT, a circuit designer can request to debug a component of the DUT. If such a request is made, the circuit designer can specify a time period of the emulation to debug. The host system identifies which FPGAs are emulating the component using the stored information. The host system retrieves stored interface signals associated with the time period and traced by the trace and injection logic of each identified FPGA. The host system signals the emulator to re-emulate the identified FPGAs. The host system transmits the retrieved interface signals to the emulator to re-emulate the component for the specified time period. The trace and injection logic of each identified FPGA injects its respective interface signals received from the host system into the logic of the DUT mapped to the FPGA. In case of multiple re-emulations of an FPGA, merging the results produces a full debug view. The host system receives, from the emulation system, signals traced by logic of the identified FPGAs during the re-emulation of the component. The host system stores the signals received from the emulator. The signals traced during the re-emulation can have a higher sampling rate than the sampling rate during the initial emulation. For example, in the initial emulation a traced signal can include a saved state of the component every X milliseconds. However, in the re-emulation the traced signal can include a saved state every Y milliseconds where Y is less than X. If the circuit designer requests to view a waveform of a signal traced during the re-emulation, the host system can retrieve the stored signal and display a plot of the signal. For example, the host system can generate a waveform of the signal. Afterwards, the circuit designer can request to re-emulate the same component for a different time period or to re-emulate another component. A host system907and/or the compiler910may include sub-systems such as, but not limited to, a design synthesizer sub-system, a mapping sub-system, a run time sub-system, a results sub-system, a debug sub-system, a waveform sub-system, and a storage sub-system. The sub-systems can be structured and enabled as individual or multiple modules, or two or more of them may be structured together as a module. Together these sub-systems structure the emulator and monitor the emulation results. The design synthesizer sub-system transforms the HDL representing a DUT905into gate level logic. For a DUT that is to be emulated, the design synthesizer sub-system receives a description of the DUT. If the description of the DUT is fully or partially in HDL (e.g., RTL or other level of representation), the design synthesizer sub-system synthesizes the HDL of the DUT to create a gate-level netlist with a description of the DUT in terms of gate level logic. The mapping sub-system partitions DUTs and maps the partitions into emulator FPGAs. The mapping sub-system partitions a DUT at the gate level into a number of partitions using the netlist of the DUT. For each partition, the mapping sub-system retrieves a gate level description of the trace and injection logic and adds the logic to the partition. As described above, the trace and injection logic included in a partition is used to trace signals exchanged via the interfaces of an FPGA to which the partition is mapped (trace interface signals). The trace and injection logic can be added to the DUT prior to the partitioning.
For example, the trace and injection logic can be added by the design synthesizer sub-system prior to or after synthesizing the HDL of the DUT. In addition to including the trace and injection logic, the mapping sub-system can include additional tracing logic in a partition to trace the states of certain DUT components that are not traced by the trace and injection logic. The mapping sub-system can include the additional tracing logic in the DUT prior to the partitioning or in partitions after the partitioning. The design synthesizer sub-system can include the additional tracing logic in an HDL description of the DUT prior to synthesizing the HDL description. The mapping sub-system maps each partition of the DUT to an FPGA of the emulator. For partitioning and mapping, the mapping sub-system uses design rules, design constraints (e.g., timing or logic constraints), and information about the emulator. For components of the DUT, the mapping sub-system stores information in the storage sub-system describing which FPGAs are to emulate each component. Using the partitioning and the mapping, the mapping sub-system generates one or more bit files that describe the created partitions and the mapping of logic to each FPGA of the emulator. The bit files can include additional information such as constraints of the DUT and routing information of connections between FPGAs and connections within each FPGA. The mapping sub-system can generate a bit file for each partition of the DUT and can store the bit file in the storage sub-system. Upon request from a circuit designer, the mapping sub-system transmits the bit files to the emulator, and the emulator can use the bit files to structure the FPGAs to emulate the DUT. If the emulator includes specialized ASICs that include the trace and injection logic, the mapping sub-system can generate a specific structure that connects the specialized ASICs to the DUT. In some embodiments, the mapping sub-system can save information about the traced/injected signals and where the information is stored on the specialized ASIC. The run time sub-system controls emulations performed by the emulator. The run time sub-system can cause the emulator to start or stop executing an emulation. Additionally, the run time sub-system can provide input signals and data to the emulator. The input signals can be provided directly to the emulator through the connection or indirectly through other input signal devices. For example, the host system can control an input signal device to provide the input signals to the emulator. The input signal device can be, for example, a test board (directly or through cables), signal generator, another emulator, or another host system. The results sub-system processes emulation results generated by the emulator. During emulation and/or after completing the emulation, the results sub-system receives emulation results from the emulator generated during the emulation. The emulation results include signals traced during the emulation. Specifically, the emulation results include interface signals traced by the trace and injection logic emulated by each FPGA and can include signals traced by additional logic included in the DUT. Each traced signal can span multiple cycles of the emulation. A traced signal includes multiple states and each state is associated with a time of the emulation. The results sub-system stores the traced signals in the storage sub-system.
For each stored signal, the results sub-system can store information indicating which FPGA generated the traced signal. The debug sub-system allows circuit designers to debug DUT components. After the emulator has emulated a DUT and the results sub-system has received the interface signals traced by the trace and injection logic during the emulation, a circuit designer can request to debug a component of the DUT by re-emulating the component for a specific time period. In a request to debug a component, the circuit designer identifies the component and indicates a time period of the emulation to debug. The circuit designer's request can include a sampling rate that indicates how often states of debugged components should be saved by logic that traces signals. The debug sub-system identifies one or more FPGAs of the emulator that are emulating the component using the information stored by the mapping sub-system in the storage sub-system. For each identified FPGA, the debug sub-system retrieves, from the storage sub-system, interface signals traced by the trace and injection logic of the FPGA during the time period indicated by the circuit designer. For example, the debug sub-system retrieves states traced by the trace and injection logic that are associated with the time period. The debug sub-system transmits the retrieved interface signals to the emulator. The debug sub-system instructs the emulator to use the identified FPGAs and for the trace and injection logic of each identified FPGA to inject its respective traced signals into logic of the FPGA to re-emulate the component for the requested time period. The debug sub-system can further transmit the sampling rate provided by the circuit designer to the emulator so that the tracing logic traces states at the proper intervals. To debug the component, the emulator can use the FPGAs to which the component has been mapped. Additionally, the re-emulation of the component can be performed at any point specified by the circuit designer. For an identified FPGA, the debug sub-system can transmit instructions to the emulator to load multiple emulator FPGAs with the same configuration of the identified FPGA. The debug sub-system additionally signals the emulator to use the multiple FPGAs in parallel. Each FPGA from the multiple FPGAs is used with a different time window of the interface signals to generate a larger time window in a shorter amount of time. For example, the identified FPGA can require an hour or more to run a certain number of cycles. However, if multiple FPGAs have the same data and structure of the identified FPGA and each of these FPGAs runs a subset of the cycles, the emulator can require a few minutes for the FPGAs to collectively run all the cycles. A circuit designer can identify a hierarchy or a list of DUT signals to re-emulate. To enable this, the debug sub-system determines the FPGA needed to emulate the hierarchy or list of signals, retrieves the necessary interface signals, and transmits the retrieved interface signals to the emulator for re-emulation. Thus, a circuit designer can identify any element (e.g., component, device, or signal) of the DUT to debug/re-emulate.
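The parallel re-emulation scheme described above amounts to splitting one window of traced cycles into contiguous per-FPGA sub-windows. A minimal sketch, with hypothetical cycle counts:

```python
# Split one long window of traced cycles into per-FPGA sub-windows so
# identically configured FPGAs can replay them concurrently.
from typing import List, Tuple


def split_window(start_cycle: int, end_cycle: int,
                 num_fpgas: int) -> List[Tuple[int, int]]:
    """Divide [start_cycle, end_cycle) into contiguous sub-windows,
    one per FPGA loaded with the same configuration."""
    total = end_cycle - start_cycle
    size = -(-total // num_fpgas)  # ceiling division
    windows = []
    for i in range(num_fpgas):
        lo = start_cycle + i * size
        hi = min(lo + size, end_cycle)
        if lo < hi:
            windows.append((lo, hi))
    return windows


# Example: 1,000,000 traced cycles replayed across 8 FPGAs in parallel.
print(split_window(0, 1_000_000, 8))
```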
The waveform sub-system generates waveforms using the traced signals. If a circuit designer requests to view a waveform of a signal traced during an emulation run, the host system retrieves the signal from the storage sub-system. The waveform sub-system displays a plot of the signal. For one or more signals, when the signals are received from the emulator, the waveform sub-system can automatically generate the plots of the signals. FIG.10illustrates an example machine of a computer system1000within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine may operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The example computer system1000includes a processing device1002, a main memory1004(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory1006(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device1018, which communicate with each other via a bus1030. Processing device1002represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device1002may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device1002may be configured to execute instructions1026for performing the operations and steps described herein. The computer system1000may further include a network interface device1008to communicate over the network1020. The computer system1000also may include a video display unit1010(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device1012(e.g., a keyboard), a cursor control device1014(e.g., a mouse), a graphics processing unit1022, a signal generation device1016(e.g., a speaker), a video processing unit1028, and an audio processing unit1032. The data storage device1018may include a machine-readable storage medium1024(also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions1026or software embodying any one or more of the methodologies or functions described herein.
The instructions1026may also reside, completely or at least partially, within the main memory1004and/or within the processing device1002during execution thereof by the computer system1000, the main memory1004and the processing device1002also constituting machine-readable storage media. In some implementations, the instructions1026include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium1024is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device1002to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. ADDITIONAL CONFIGURATION CONSIDERATIONS Example benefits and advantages of the disclosed configurations include increasing the accuracy with which a delay of a combinational path within a DUT is estimated, increasing the speed at which a DUT is emulated due to compiler partitioning and P&R that are both improved as the accuracy of delays increases (i.e., decreasing the processing cycles needed by an emulator when emulating the compiled DUT), and decreasing the processing resources consumed to estimate a delay of a combinational path within the DUT. To decrease the processing resources needed to determine a combinational path delay, the delay estimation system described herein uses feature vectors whose dimensions are orthogonal to one another. By using dimensions that avoid redundant information (e.g., where data about a combinational path in one feature can be derived from another feature), the delay estimation system increases the accuracy by which the delay is generated (e.g., additional, non-redundant information increases the system's ability to distinguish between different combinational paths and corresponding delays) while simultaneously not wasting processing resources to process redundant information. By providing a more accurate delay estimate at early backend phases of a compilation workflow, the delay estimation system allows a compiler to focus on optimizing true critical paths of a DUT rather than incorrectly flagged critical paths whose delays are not as large as the true critical paths' delays. Thus, the delay estimation system can improve DUT emulation (e.g., optimized critical paths cause the speed of emulation to increase) without manual tuning or additional iterations to adjust internal FPGA delays. Furthermore, reducing how often emulation must be re-performed because initial results were low in accuracy also reduces the processing resources consumed by an emulation system. A higher emulation frequency enables a faster turnaround in the testing process of user designs, allows more coverage, and lowers cost.
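The description above does not prescribe how orthogonal (non-redundant) feature dimensions are chosen; one plausible screening step is to flag highly correlated feature pairs in the training data, as in this illustrative sketch:

```python
# One plausible way to screen feature dimensions for redundancy before
# training the delay model: flag highly correlated pairs. The patent does
# not prescribe this method; it is an illustrative check.
from typing import List, Tuple

import numpy as np


def redundant_pairs(features: np.ndarray, names: List[str],
                    threshold: float = 0.95) -> List[Tuple[str, str]]:
    """Return feature-name pairs whose absolute Pearson correlation
    exceeds the threshold across the training set.

    `features` has shape (n_samples, n_features)."""
    corr = np.corrcoef(features, rowvar=False)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                flagged.append((names[i], names[j]))
    return flagged
```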
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein. The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc. In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof.
It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples. DETAILED DESCRIPTION In a system including multiple integrated circuit (IC) chips, testing data is communicated from a testing interface of a main IC chip to one or more auxiliary IC chips via multiple signal wires. The wires (traces) connect the main IC chip with each of the auxiliary IC chips. The testing interface is a Joint Test Action Group (JTAG) interface or an internal JTAG (iJTAG) interface. In one example, to reduce the number of wires that connect the main IC chip and each auxiliary IC chip, at least a portion of the test data is encoded by the main IC chip before being communicated to the auxiliary IC chips or within the main IC chip. Encoding the test data allows for an increased amount of test data to be communicated over a smaller number of wires, as compared to conventional methods. Accordingly, the number of wires within the test interface is reduced. Reducing the number of wires decreases the cost of the corresponding device and increases the amount of routing available for other signals within the multiple IC chip device. However, when the encoded test data is decoded, errors may be introduced. In one example, the test data includes dynamic data and static data. Dynamic data is data that changes values over a period of time. Example dynamic data includes clock data (e.g., clock signals) and program-counter data, among others. Static data is data that does not change values, or remains substantially constant, over the period of time during which the dynamic data changes values. Example static data includes finite state machine (FSM) indicator data and instruction data, among others. In one example, the static data is encoded and communicated within an IC chip and between IC chips, while the dynamic data is not encoded. Encoding the static data reduces the number of wires connecting the IC chips to each other, decreasing the manufacturing costs of the corresponding device. Further, communicating the dynamic data in a non-encoded state reduces errors as compared to systems that encode the dynamic data. FIG.1illustrates an IC chip device100, according to one or more examples. The IC chip device100includes a main IC chip (e.g., anchor IC chip)110, and an auxiliary IC chip (e.g., chiplet)120. The IC chip device100is illustrated as having two IC chips (e.g., the main IC chip110and the auxiliary IC chip120). However, in other examples, the IC chip device100includes more than two IC chips. For example, the main IC chip110may be connected to more than one auxiliary IC chip. In one example, the main IC chip110and the auxiliary IC chip120are disposed on a common substrate (e.g., an interposer or another substrate device). In another example, the auxiliary IC chip120is mounted to the main IC chip110, forming a three-dimensional IC chip stack. The main IC chip110includes circuit blocks, such as power supply controllers and memory controllers, among others. The main IC chip110is an application specific IC (ASIC) or a programmable IC (e.g., a field programmable gate array (FPGA)). The main IC chip110includes a test access port (TAP) controller112. The TAP controller112includes encoder circuitry114. The TAP controller112is connected to testing circuitry116. The testing circuitry116includes one or more controllers, boundary-scan cells, and registers.
Further, the testing circuitry116includes the decoder circuitry150. The testing circuitry116is used to perform tests within the main IC chip110. For example, the testing circuitry116is used to determine connectivity and data errors within the main IC chip110. The main IC chip110may further include transmitter circuitry, receiver circuitry, and/or other devices. The auxiliary IC chip120may be a hardware accelerator, artificial intelligence (AI) engine, and/or a transceiver engine, among others. The use of the auxiliary IC chip120with a main IC chip110de-couples the development cycle of the main IC chip110from auxiliary IC chips (e.g., the auxiliary IC chip120). Further, the use of a main IC chip110with auxiliary chips120allows for different types of auxiliary chips to be used with a main IC chip in different configurations. In an example including multiple auxiliary chips120, multiple different types of IC chips are connected to the main IC chip. The auxiliary IC chip120is an ASIC or a programmable IC. The auxiliary IC chip120includes a TAP controller122. The TAP controller122includes decoder circuitry150. The auxiliary IC chip120further includes testing circuitry126. The testing circuitry126is used to perform tests within the auxiliary IC chip120. For example, the testing circuitry126is used to determine connectivity and data errors within the auxiliary IC chip120. In one example, the testing circuitry126includes decoder circuitry150. Further, the testing circuitry126includes one or more controllers, boundary-scan cells, and registers. The auxiliary IC chip120further includes transmitter circuitry, receiver circuitry, and/or other circuit devices. In one example, the TAP controller122omits the decoder circuitry150. The main IC chip110is connected to the auxiliary IC chip120via the wires130. The wires130include wires131-135. The wires130are routed within an interposer or another substrate. In one example, the wires130are routed in one or more layers of the same interposer or substrate on which the main IC chip110and the auxiliary IC chip120are mounted. In examples that include more than one auxiliary IC chip120, the main IC chip110is connected to each of the auxiliary IC chips via wires configured similarly to the wires130. In one example, the TAP controller112, the testing circuitry116, the TAP controller122, the testing circuitry126, and the wires130form a testing interface140. In one example, the testing interface is a JTAG interface or an iJTAG interface. As will be described in greater detail in the following, the testing circuitry116and the testing circuitry126test interoperability among elements of the corresponding IC chip. For example, the testing circuitry116and126include boundary-scan cells that are used to test the input connections of the elements of the IC chips110and120, the output connections of the elements of the IC chips110and120, and bi-directional connections of the elements of the IC chips110and120. In one or more examples, the boundary scan cells within each of the testing circuitries116and126are connected together to form a shift register in the respective IC chip. The boundary scan cells are accessed through a test data in (TDI) input and the TDI signal received by the corresponding TAP controller. In one example, the TAP controller112receives test data including a test clock (TCK) signal, a test mode select (TMS) signal, and a TDI signal. The TCK signal is a clock control signal.
The TMS signal controls the functionality of the testing interface140. The TDI signal includes data corresponding to the type of test to be performed. The TAP controller112outputs the test data out (TDO) signal. In one example, the TDI signal indicates a neither ijtag/bscan instruction, an ijtag instruction, an EXTEST instruction, a SAMPLE instruction, a BYPASS instruction, an EXTEST_SMPL instruction, an EXTEST_PULSE instruction, an EXTEST_TRAIN instruction, a high-z instruction, and/or a block ijtag reset_tap_b instruction. In one or more examples, the TDI signal may be indicative of other instructions. The neither ijtag/bscan instruction corresponds to an instruction where neither an ijtag test nor a boundary scan test is performed. The EXTEST instruction corresponds to a test in which the boundary-scan cells are used to test the interconnect structure between devices of an IC chip. The SAMPLE instruction selects the boundary-scan register and sets up the boundary-scan cells to sample values within the IC chip. A SAMPLE instruction may also include a PRELOAD instruction that is used to preload known values into the output boundary-scan cells prior to a follow-on operation. The BYPASS instruction bypasses one or more elements of the IC chip to test other elements of the IC chip. The EXTEST_SMPL instruction samples the data on the boundary-scan cells. The EXTEST_PULSE instruction generates a single pulse to the boundary-scan cells. The EXTEST_TRAIN instruction generates a stream of pulses to the boundary-scan cells. A high-z instruction places the boundary-scan cells in a three-state mode or an input receive mode. A block ijtag reset_tap_b instruction resets the corresponding TAP controller. In one or more examples, the instructions are loaded into an instruction register of the TAP controller (e.g., the TAP controller112or the TAP controller122) before being loaded into the boundary scan cells based on the TMS signal and the TCK signal. The encoder circuitry114receives the TDI signal and encodes the TDI signal into encoded instruction signals (the jtag_enc[0] signal, the jtag_enc[1] signal, and the jtag_enc[2] signal). The encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] are communicated via wires133,134, and135, respectively. The encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] are each binary signals. Further, the TCK signal and the TMS signal are communicated from the TAP controller112to the TAP controller122via the wires131and132, respectively. The wires131and132are shielded, while the wires133-135are not shielded. For example, the wires131and132may be shielded on both sides. Further, the wires131and132may be disposed in a metal layer different from that of the wires133-135. In one example, the wires131and132are formed in a metal layer above that of the wires133-135. The TCK signal and the TMS signal are not encoded. The TAP controller112acts as a pass-through, passing the TCK signal and the TMS signal to the TAP controller122in a non-encoded state. In one example, the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] are communicated at a lower speed than the TCK signal and the TMS signal. In a typical iJTAG implementation, the iJTAG control signals (e.g., TCK signal and TMS signal) have a half cycle setup time and half cycle hold time even though the iJTAG control signals travel across the entire corresponding IC chip.
However, as the encoded signals transition less frequently than the iJTAG control signals TCK and TMS, the encoded signals can be transmitted at a lower frequency than the TCK and TMS signals without the use of very high metal layer resources within the corresponding IC chip device, and without being required to meet the half cycle timing constraint. Accordingly, an IC chip device that employs encoded signals as described herein has improved performance as compared to an IC chip device that does not employ encoded signals as described herein. Further, an IC chip device that employs encoded signals as described herein uses a reduced number of high cost wires and corresponding shielding as compared to an IC chip device that does not employ encoded signals as described herein. In one or more examples, the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] are multi-bit signals. For example, the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] are binary, ternary, or multi-bit signals having greater than three bits. The encoded signals may be referred to together as encoded signal jtag_ir_enc[2:0]. Each bit of the encoded signal jtag_ir_enc[2:0] corresponds to a respective one of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2]. In one example, the TAP controller112determines the type of instruction from the TDI signal. The encoder circuitry114generates the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] based on the determined instruction type. For example, the TAP controller112determines that neither an iJTAG nor a boundary scan test is to be performed from the TDI signal. Accordingly, the encoder circuitry114determines that the value of each bit of the encoded signal jtag_ir_enc[2:0] and each of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] is 0. FIG.2illustrates the decoder circuitry150. The decoder circuitry150includes TAP FSM circuitry152, decoder circuitry154, and output circuitry156. The decoder circuitry150receives the TCK signal, the TMS signal, and the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2]. The decoder circuitry150determines and outputs the control signals210. The control signals210correspond to an instruction signal output to the testing circuitry. For example, the control signals210include a clock_dr signal, update_dr signal, capture_dr signal, shift_dr signal, reset_tap_b signal, an init_memory signal, an ac_test signal, an extest signal, an extest_smpl signal, a highz signal, and a select_dr signal. In one example, the clock_dr signal is a clock signal used for boundary scan (bscan) and iJTAG operation. The source of the clock_dr signal is the TCK signal. The clock_dr signal is transmitted based on the boundary scan or iJTAG instruction being entered. The update_dr signal, when asserted, is used to indicate that the shift chain data is ready to be copied into the destination memory locations. The capture_dr signal indicates the destination data to be copied into the shift chain, effectively performing a read operation. The shift_dr signal indicates that a shift chain is to act as a shift register and pass information from the TDI pin to the TDO pin. The reset_tap_b signal indicates that destination memory elements (e.g., flipflops) are to be set or reset to a default value. The reset_tap_b signal corresponds to an asynchronous reset and does not require a clock_dr pulse. The init_memory, ac_test, ac_mode, extest, extest_smpl, gts_usr_b signals are control signals used for a boundary scan operation.
The select_dr signal indicates that the receiving memory elements (e.g., flipflops) are part of an enabled iJTAG network and act according to the shift_dr, capture_dr, and update_dr signaling. The decoder circuitry154determines an instruction based on the values of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2]. For example, the decoder circuitry154may include a look-up-table (LUT) or some other decoding element that is used to determine the instruction from the values of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2]. The encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] may be represented as [N, M, O], wherein N corresponds to the value of jtag_enc[0], M corresponds to the value of jtag_enc[1], and O corresponds to the value of jtag_enc[2]. Accordingly, for [0, 0, 0] the decoder circuitry154determines that the instructions correspond to a neither ijtag/bscan instruction, for [0, 0, 1] the decoder circuitry154determines that the instructions correspond to an ijtag instruction, for [0, 1, 0] the decoder circuitry154determines that the instructions correspond to an EXTEST instruction, for [0, 1, 1] the decoder circuitry154determines that the instructions correspond to an EXTEST_SMPL instruction, for [1, 0, 0] the decoder circuitry154determines that the instructions correspond to an EXTEST_PULSE instruction, for [1, 0, 1] the decoder circuitry154determines that the instructions correspond to an EXTEST_TRAIN instruction, for [1, 1, 0] the decoder circuitry154determines that the instructions correspond to a high-z instruction, and for [1, 1, 1] the decoder circuitry154determines that the instructions correspond to a block ijtag reset_tap_b instruction. In other examples, other values of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] may be used to determine other instructions.
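The decode table above can be expressed directly as a lookup, matching the LUT-based decoding element the description mentions. The sketch below mirrors the listed encodings; the encoder direction is simply the inverse mapping.

```python
# The decode table above as a simple lookup. Keys are
# (jtag_enc[0], jtag_enc[1], jtag_enc[2]) values.
DECODE_LUT = {
    (0, 0, 0): "neither ijtag/bscan",
    (0, 0, 1): "ijtag",
    (0, 1, 0): "EXTEST",
    (0, 1, 1): "EXTEST_SMPL",
    (1, 0, 0): "EXTEST_PULSE",
    (1, 0, 1): "EXTEST_TRAIN",
    (1, 1, 0): "high-z",
    (1, 1, 1): "block ijtag reset_tap_b",
}

# Matching encoder direction, as performed by encoder circuitry 114.
ENCODE_LUT = {name: bits for bits, name in DECODE_LUT.items()}


def decode(jtag_enc0: int, jtag_enc1: int, jtag_enc2: int) -> str:
    """Decoder circuitry 154: recover the instruction from the three
    encoded signal values."""
    return DECODE_LUT[(jtag_enc0, jtag_enc1, jtag_enc2)]


assert decode(0, 1, 0) == "EXTEST"
assert ENCODE_LUT["ijtag"] == (0, 0, 1)
```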
The TAP FSM circuitry152includes a data register (DR)212and determines a corresponding instruction based on the values of the TMS signal and the TCK signal. For example, the TAP FSM circuitry152includes a plurality of states of an FSM300ofFIG.3. The states of the FSM300are traversed based on the values of the TMS signal and the TCK signal. In one example, the TAP FSM circuitry152starts at state310, test logic reset. At state310, the test circuitry116or126is reset. Based on the TMS signal having a value of 1 (e.g., a high voltage value), the state310is repeated. The value of the TMS signal is determined at each cycle of the TCK signal. Based on the TMS signal transitioning from a value of 1 to a value of 0 (e.g., a low voltage value), the TAP FSM circuitry152moves from state310to state312, run-test/idle. At the state312, the test circuitry is initialized and idle mode is set. Based on the TMS signal having a value of 0, the state of the TAP FSM circuitry152stays in state312. Based on the TMS signal transitioning to a value of 1, the TAP FSM circuitry152moves to the state314. At state314, a data register scan (DR-Scan) is selected. Based on the TMS signal transitioning to a value of 0, the TAP FSM circuitry152moves to state316, capture-DR. At state316, a parallel-load procedure is used to load test data into the current data register. At state316, based on the TMS signal maintaining a value of 0, the TAP FSM circuitry152moves to state318, shift-DR. At state318, data of the testing circuitry is shifted to a TDO output or other output. At state318, based on the TMS signal maintaining a value of 0, the state318is maintained. At state316, based on the TMS signal transitioning to a value of 1, the TAP FSM circuitry152moves to state320, Exit1-DR. At state320, the selected DR is exited. Further, at state318, based on the TMS signal transitioning to a value of 1, the TAP FSM circuitry152moves to state320, Exit1-DR. At state320, based on the TMS signal transitioning to a value of 0, the TAP FSM circuitry152transitions to state322, Pause-DR. At state322, the shifting of test data within the test circuitry is paused. At state322, based on the determination that the TMS signal maintains a value of 0, the state322is maintained. At state320, based on the determination that the TMS signal transitions to a value of 1, the TAP FSM circuitry152proceeds to state326, update-DR. At state326, data in the data register of the test circuitry is latched. At state322, based on the determination that the TMS signal transitions to a value of 1, the TAP FSM circuitry152proceeds to state324. At state324, based on the determination that the TMS signal maintains a value of 1, the TAP FSM circuitry152proceeds to state326. Further, at state324, based on the determination that the TMS signal transitions to a value of 0, the TAP FSM circuitry152proceeds to state318. At state326, based on a determination that the TMS signal maintains a value of 1, the TAP FSM circuitry152proceeds to the state314. At state326, based on a determination that the TMS signal transitions to a value of 0, the TAP FSM circuitry152proceeds to the state312. At state314, based on the TMS signal maintaining a value of 1, the TAP FSM circuitry152proceeds to state330. Further, based on the TMS signal maintaining a value of 1, the TAP FSM circuitry152proceeds to state310. However, as the TAP FSM circuitry152does not include an instruction register, the TAP FSM circuitry152is not updated as the TAP FSM circuitry152proceeds through the states330. The state of the TAP FSM circuitry152is output to the output circuitry156. Further, the decoded instruction generated by the decoder circuitry154is output to the output circuitry156. The output circuitry156generates one or more of the control signals210based on the decoded instruction and the state of the TAP FSM circuitry152. The control signals210are output to the test circuitry. For example, the control signals are output to the boundary-scan cells and registers of the corresponding test circuitry. In one or more examples, the decoder circuitry154outputs the decoded instruction as a single output signal. For example, the decoder circuitry154outputs the ijtag instruction decoded from an encoded signal jtag_ir_enc[2:0] value of 001. The output circuitry156identifies the active instruction and combines the active instruction with the state of the TAP FSM circuitry152to drive corresponding output signals210(e.g., output signals select_dr, capture_dr, update_dr, and/or shift_dr) at the appropriate times based on the state of the TAP FSM circuitry152(e.g., capture-dr, shift-dr and update-dr). In another example, for an encoded signal value of 010, the decoder circuitry outputs the bscan extest instruction. The output circuitry156determines that the extest instruction is active, and outputs the corresponding control signals210(e.g., control signals extest, extest_smpl, init_memory, capture_dr, shift_dr and update_dr) at the appropriate times depending on the state of the TAP FSM circuitry152.
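For reference, the data-register branch of the FSM walkthrough above can be written as an executable transition table. State names pair the conventional JTAG labels with the numerals used above; the behavior of state330with TMS at 0 is a simplification, since the description only follows the TMS-high path through the states330.

```python
# Executable sketch of the data-register branch of the TAP FSM (FIG. 3).
# Keys are (current state, sampled TMS value); one transition per TCK cycle.
TRANSITIONS = {
    ("test-logic-reset/310", 1): "test-logic-reset/310",
    ("test-logic-reset/310", 0): "run-test-idle/312",
    ("run-test-idle/312", 0): "run-test-idle/312",
    ("run-test-idle/312", 1): "select-dr-scan/314",
    ("select-dr-scan/314", 0): "capture-dr/316",
    ("select-dr-scan/314", 1): "select-ir-scan/330",
    ("capture-dr/316", 0): "shift-dr/318",
    ("capture-dr/316", 1): "exit1-dr/320",
    ("shift-dr/318", 0): "shift-dr/318",
    ("shift-dr/318", 1): "exit1-dr/320",
    ("exit1-dr/320", 0): "pause-dr/322",
    ("exit1-dr/320", 1): "update-dr/326",
    ("pause-dr/322", 0): "pause-dr/322",
    ("pause-dr/322", 1): "exit2-dr/324",
    ("exit2-dr/324", 0): "shift-dr/318",
    ("exit2-dr/324", 1): "update-dr/326",
    ("update-dr/326", 0): "run-test-idle/312",
    ("update-dr/326", 1): "select-dr-scan/314",
    ("select-ir-scan/330", 1): "test-logic-reset/310",
    # Simplification: a full TAP would enter the IR column here; the
    # description above does not follow the TMS-low path from state 330.
    ("select-ir-scan/330", 0): "run-test-idle/312",
}


def run_tap_fsm(tms_bits):
    """Sample TMS once per TCK cycle and walk the FSM, returning the
    sequence of visited states."""
    state = "test-logic-reset/310"
    trace = [state]
    for tms in tms_bits:
        state = TRANSITIONS[(state, tms)]
        trace.append(state)
    return trace


# Drive one capture/shift/update pass through the data-register column.
print(run_tap_fsm([0, 1, 0, 0, 0, 1, 1]))
```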
FIG.4illustrates the main IC chip110. As illustrated inFIG.4, the test circuitry116includes selection controllers410aand410b,dynamic function exchange (DFX) controllers412a,412b,412c,412d,adapter circuitry414a,414b, DFX controllers416a,416b,and auxiliary detect circuitry418a,418b. The selection controllers410, the DFX controllers412, the adapter circuitry414, and auxiliary detect circuitry418are connected to the communication bus420. In one example, the TAP controller112is connected to the communication bus420, and communicates the TMS signal, the TCK signal, the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2] via the communication bus420. The selection controller410acouples or decouples the DFX controllers412aand412b,the adapter circuitry414a,and the auxiliary detect circuitry418ato and from the communication bus420. For example, the selection controller410adetermines whether or not the DFX controllers412a,412b,the adapter circuitry414a, and the auxiliary detect circuitry418areceive the TMS signal, the TCK signal, the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2] from the TAP controller112. The DFX controllers412control reconfigurable designs within the main IC chip110based on bitstreams. In one or more examples, the adapter circuitries414provide the DFX controllers416with additional functionality not available within the DFX controllers416. In such examples, the DFX controllers412may have more functionality than that of the DFX controllers416. The auxiliary detect circuitries418determine whether or not a corresponding auxiliary IC chip (e.g., the corresponding auxiliary IC chip120) is connected to the main IC chip110. In one example, the auxiliary detect circuitries418receive a control signal440and determine whether or not an auxiliary IC chip is connected based on the control signal440. Based on a determination that an auxiliary IC chip is connected, the auxiliary detect circuitries output a signal442that includes the TMS signal, the TCK signal, the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2] to the auxiliary IC chip. In one example, the auxiliary detect circuitry418areceives the control signal440a.Based on the control signal440ahaving a zero voltage level (or another predefined voltage level), the auxiliary detect circuitry418adetermines that an auxiliary IC chip is connected to the main IC chip110and outputs the signals442a.Further, the auxiliary detect circuitry418breceives the control signal440b.Based on the control signal440bhaving a zero voltage level (or another predefined voltage level), the auxiliary detect circuitry418bdetermines that an auxiliary IC chip is connected to the main IC chip110and outputs the signals442b. The signals442ainclude the TMS signal, the TCK signal, the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2]. The signals442aare each communicated over a respective wire (e.g., wires131-135). The selection controllers410, the DFX controllers412, and the adapter circuitries414include decoder circuitry150. The decoder circuitry150receives the TMS signal, the TCK signal, the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2], and determines the corresponding testing instructions as is described above with regard toFIGS.2and3and described in the following with regard toFIGS.10and11.
FIG.5illustrates a portion of the main IC chip110and the auxiliary IC chip120. As illustrated inFIG.5, the main IC chip110is connected to the auxiliary IC chip120via the auxiliary detect circuitry418. The auxiliary IC chip120includes a multiplexer510that receives the signals442from the auxiliary detect circuitry418of the main IC chip110via wires (e.g., the wires131-135). Further, the multiplexer510receives the output of the TAP controller122. The TAP controller122receives the TMS signal, TCK signal, TDI signals, and outputs a TDO signal. The multiplexer510selects one of the output of the TAP controller122and the signals442. In one example, the multiplexer510outputs the signals442or the output of the TAP controller122to the testing circuitry126via a communication bus520. The testing circuitry126includes DFX controllers412e-412i,the adapter circuitry414c,DFX controllers416c-416h,and selection controller410c.As is described above, the DFX controllers412e-412i,the adapter circuitry414c,and the selection controller410cinclude decoder circuitry150. In one or more examples, when the IC chip120is tested on a wafer independently, the IC chip110is not present to drive the IC chip120. In such an example, a TAP controller is used (e.g., TAP controller122) to communicate test data to the testing circuitry. When wafer level test of the IC chip120has been completed, the IC chip120is integrated in a package with the IC chip110. The integrated IC chip120is then tested again as part of the package. In such an implementation, the signal442is used to test the IC chip120within the package. The use of signal442allows for the use of a minimal signal count while also keeping the testing interface instruction length at a minimum. Further, when using the signal442, the IC chips are not daisy chained together within the package with the iJTAG network of TAP controllers. When daisy chaining the IC chips together, each IC chip adds a corresponding instruction register to the chain. In such an example, as the number of auxiliary IC chips120(e.g., chiplets) increases, the shift time for instructions increases, negatively impacting test time and an IC chip debug process. Accordingly, using the signal442as part of the IC chip test process reduces the test time and improves the debug process. The auxiliary IC chip120drives the chip detect signal440with a ground signal (e.g., a logic value of 0 or low voltage value). Accordingly, the auxiliary detect circuitry418determines that the IC chip120is present based on the chip detect signal440being driven with a ground signal. In one or more examples, the auxiliary IC chip120is not present (e.g., omitted). In such an example, the auxiliary detect circuitry418is driven by weak pullup circuitry within the IC chip110that drives the chip detect signal440with a logic value of 1 (e.g., a high voltage value).
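The presence-detect convention just described reduces to a single comparison; a minimal sketch, with illustrative names:

```python
# Presence detection per the description: a mounted auxiliary chip drives
# the chip-detect wire to ground, while a weak pullup inside the main IC
# chip makes the wire read 1 when no chip is attached.
def auxiliary_chip_present(chip_detect_level: int) -> bool:
    """chip_detect_level is the sampled logic value of signal 440:
    0 -> auxiliary IC chip drives the wire low (present),
    1 -> weak pullup wins (absent)."""
    return chip_detect_level == 0


assert auxiliary_chip_present(0) is True   # chiplet mounted in package
assert auxiliary_chip_present(1) is False  # wafer-level test / chip omitted
```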
FIG.6illustrates the selection controller410. The selection controller410receives the encoded signal jtag_ir_enc_fr_west[2:0]. The encoded signal jtag_ir_enc_fr_west[2:0] includes the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2]. Further, the selection controller410receives the TCK signal and TMS signal. The selection controller410outputs the encoded signal jtag_ir_enc_fr_west[2:0] as the encoded signal jtag_ir_enc_to_east[2:0]. The encoded signal jtag_ir_enc_fr_west[2:0] is the same signal as the encoded signal jtag_ir_enc_to_east[2:0]. The decoder circuitry150receives the encoded signal jtag_ir_enc_fr_west[2:0], the TCK signal, and the TMS signal and generates the control signal210. The selection controller410further includes segment insertion bit (SIB) circuitry610and circuitry612. The SIB circuitry610and the circuitry612receive the control signal210and the encoded signal jtag_ir_enc_fr_west[2:0] and output the control signal210and the encoded signal jtag_ir_enc_to_south[2:0]. In one example, the TAP decoder block150provides the signals that are expected by the SIB circuitry610for normal operation. The circuitry612receives the output of the SIB circuitry610and re-encodes the encoded signals (jtag_ir_enc_to_south[2:0]), which are sent on to further SIB circuitries and the endpoints of the testing network. FIG.7illustrates the DFX controller412. The DFX controller412includes decoder circuitry150and testing network circuitry710. The decoder circuitry150outputs the control signals210to the testing network circuitry710and the testing elements712. The testing network circuitry710and the testing elements712perform tests based on the control signals210. In one example,FIG.7illustrates an endpoint of a testing network. The decoder circuitry150provides the full encoded signal expansion from the encoded signal jtag_ir_enc[2:0]. The testing network circuitry710receives the control signal210. The elements of the testing network circuitry710include SIBs, test data registers (TDRs), boundary scan test instruments, and/or iJTAG test instruments. FIG.8illustrates the adapter circuitry414. The adapter circuitry414includes decoder circuitry150and SIB circuitry810. The adapter circuitry414receives the TCK signal, the TMS signal, the encoded signal jtag_ir_enc_fr_west[2:0], and outputs the encoded signal jtag_ir_enc_fr_east[2:0], the control signal210, and control signal812. The SIB circuitry810generates the control signal812based on the control signal210. FIG.9illustrates the auxiliary detect circuitry418. The auxiliary detect circuitry418includes decoder circuitry150, encoder circuitry910, repeater circuitry912, multiplexer914, demultiplexer916, repeater circuitry918, repeater circuitry920, SIB circuitry922, and multiplexer924. The auxiliary detect circuitry418receives the TCK signal, the TMS signal, and the encoded signal jtag_ir_enc_fr_west[2:0]. Further, the auxiliary detect circuitry418receives the jtag_tdi signal and the bscan_tdi signal. The jtag_tdi signal and the bscan_tdi signal may be received from other elements within the IC chip110. For example, the jtag_tdi signal and the bscan_tdi signal may be received from another one of the selection controllers410, DFX controllers412, adapter circuitry414, DFX controllers416, or auxiliary detect circuitries418. The decoder circuitry150receives the TCK signal, the TMS signal, and the encoded signal jtag_ir_enc_fr_west[2:0]. The decoder circuitry150generates the control signal210. The SIB circuitry922receives the control signal210, the ijtag TDI signal, and the signal932from the demultiplexer, and generates the jtag TDO signal and the local_select_dr signal. The jtag TDO signal is output to an adjacent one of the selection controllers410, DFX controllers412, adapter circuitry414, DFX controllers416, or auxiliary detect circuitries418within the IC chip110. The encoder circuitry910receives the encoded signal jtag_ir_enc_fr_west[2:0] and the local_select_dr signal and generates the encoded signal jtag_ir_enc_fr[2:0].
The encoded signal jtag_ir_enc_fr[2:0] is output from the auxiliary detect circuitry 418 to the IC chip 120. In one example, such a process uses a single TDI pin and a single TDO pin to communicate between IC chips. The multiplexer 914 receives the bscan_tdi signal and the jtag_tdi signal and selects one of the bscan_tdi signal and the jtag_tdi signal based on the control signal 210. The repeater circuitry 912 receives the output of the multiplexer 914 and generates the TDO signal 934. In one example, the auxiliary detect circuitry 418 switches between two networks. For example, the auxiliary detect circuitry 418 switches between a boundary scan test network, which is a long un-segmented shift chain that connects all the input and output pin drivers of the IC chip 110, and an iJTAG network, which is a segmented (hierarchical) testing network. To reduce signal count within the testing interface, the auxiliary detect circuitry 418 uses the multiplexer 914 to drive a single TDO pin to another IC chip (e.g., IC chip 120) depending on the type of instruction decoded from jtag_ir_enc[2:0]. The multiplexer 914 drives the repeater circuitry 912, which aids in timing by re-generating the signal to be valid close to the rising clock edge of the TCK signal. The repeater circuitry 918 receives the TDI signal 936 from an auxiliary IC chip (e.g., the auxiliary IC chip 120). In one example, the repeater circuitry 918 aligns the TDI signal 936 with a rising edge of the TCK signal. The output of the repeater circuitry 918 is input to the demultiplexer 916. The demultiplexer 916 generates the signals 932 and 933 from the output of the repeater circuitry 918 based on the control signal 210. The repeater circuitry 920 receives the bscan_tdi signal and outputs the signal 938. The repeater circuitry 920 aligns the bscan_tdi signal with a rising edge of the TCK signal. The multiplexer 924 receives the signal 938 and the signal 933 and outputs the bscan_tdo signal. In one example, when an auxiliary IC chip is present, e.g., the IC chip 120, the chip detect signal 440 is a value of logic 0. Accordingly, in such an example, the multiplexer 924 selects and outputs the signal 933 as the bscan_tdo signal. FIG. 10 illustrates alignment circuitry 1000, according to one or more examples. The alignment circuitry 1000 may be implemented with the TAP controller 112 and/or 122. The alignment circuitry 1000 aligns the TMS signal with the TCK signal by centering the TMS signal on a falling edge of the TCK signal. The alignment circuitry 1000 includes delay circuitry 1010, multiplexer 1020, and multiplexer 1030. The delay circuitry 1010 receives the TMS signal and the TCK signal from the TAP controller 1002 and generates the signal 1040. In one example, the delay circuitry 1010 is a latch that opens when the TCK signal has a low value (e.g., a logic value of 0). In one example, when the TCK signal is low, the TMS signal propagates as the signal 1040. Accordingly, any change in the TMS signal is centered at the falling edge of the TCK signal, ensuring that the timing of the TMS signal has a half-cycle of margin for setup time and a half-cycle of margin for hold time. The TAP controller 1002 is configured similarly to the TAP controller 112 or 122. The multiplexer 1020 receives the TMS signal from the TAP controller 1002 and the signal 1040 and outputs a centered TMS signal based on the select signal recenter_tms_tdr received from the TAP controller 1002. The multiplexer 1030 receives the TCK signal from the TAP controller and outputs the aligned TCK signal.
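The half-cycle margin argument can be illustrated with a tiny simulation. The sketch below assumes the delay circuitry 1010 behaves as a transparent-low latch, consistent with the description of a latch that opens while TCK is low; the sample values are illustrative only.

```python
# Minimal sketch of the TMS re-centering idea from FIG. 10, assuming a
# transparent-low latch: while TCK is low the latch passes TMS through;
# while TCK is high it holds the last value. Any TMS transition therefore
# becomes visible only while TCK is low, i.e., centered on the falling
# edge, leaving roughly half a cycle of setup margin and half a cycle of
# hold margin at the rising edge.

def recenter_tms(tck_samples, tms_samples):
    """Simulate the latch over equally spaced time samples."""
    out, held = [], 0
    for tck, tms in zip(tck_samples, tms_samples):
        if tck == 0:          # latch transparent: TMS propagates
            held = tms
        out.append(held)      # latch opaque while TCK is high
    return out

# TCK toggling every sample; TMS changes while TCK is high.
tck = [1, 0, 1, 0, 1, 0]
tms = [0, 0, 1, 1, 0, 0]
print(recenter_tms(tck, tms))  # -> [0, 0, 0, 1, 1, 0]: the TMS edge is
                               #    deferred until TCK is low
```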
In one or more examples, the alignment circuitry 1000 maintains a maximum safe setup and hold margin for the TMS signal. To maintain these operating margins, the TCK signal and the TMS signal are routed similarly. If the TCK signal and the TMS signal are not routed similarly, the TCK signal will propagate faster, reducing the setup margin of the TMS signal. The multiplexer 1030 matches the multiplexer 1020 to maintain the same propagation delay for the TCK signal and the TMS signal. FIG. 11 illustrates a flowchart of a method 1100 for communicating testing data, according to one or more examples. At 1110 of the method 1100, the TAP controller 112 receives the TCK signal, the TMS signal, and the TDI signal. At 1120 of the method 1100, the TDI signal is encoded by the encoder circuitry 114. The TDI signal is encoded into the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2]. At 1130 of the method 1100, the encoded signals (e.g., the encoded signal jtag_enc[0], the encoded signal jtag_enc[1], and the encoded signal jtag_enc[2]), the TMS signal, and the TDI signal are communicated from the TAP controller 112 to the TAP controller 122. The TAP controller 112 communicates the TMS signal via the wire 132, the TCK signal via the wire 131, and the encoded signals via the wires 133-135, respectively. At 1140 of the method 1100, the TAP controller 122 receives the encoded signals, the TMS signal, and the TCK signal and decodes the encoded signals. In one example, the decoder circuitry 154 decodes the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2] to determine an instruction based on the values of the encoded signals jtag_enc[0], jtag_enc[1], and jtag_enc[2]. Further, the TAP FSM circuitry 152 determines a corresponding instruction based on the values of the TMS signal and the TCK signal. The instruction and the decoded signal are used to determine a control signal. At 1150 of the method 1100, the test circuitry 126 performs a test based on the control signal. The test circuitry 126 tests the interoperability among and/or the functions of the elements of the auxiliary IC chip 120. Test results are communicated from the auxiliary IC chip 120 to the main IC chip 110. As is described above, a multiple IC chip device communicates test data from a main IC chip to an auxiliary IC chip. A portion of the test data is encoded and a portion of the test data is not encoded before it is communicated from the main IC chip to the auxiliary IC chip. The encoded test data is communicated via multiple wires connecting the main IC chip with the auxiliary IC chip. Further, the non-encoded test data is communicated via respective wires connecting the main IC chip with the auxiliary IC chip. Communicating encoded data reduces the number of wires used to connect the main IC chip with the auxiliary IC chip, reducing the cost of the corresponding device. While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
11860229
In the figures, elements having the same designation have the same or similar function. DETAILED DESCRIPTION OF THE INVENTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. While the embodiments will be described in conjunction with the drawings, it will be understood that they are not intended to limit the embodiments. On the contrary, the embodiments are intended to cover alternatives, modifications and equivalents. Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a thorough understanding. However, it will be recognized by one of ordinary skill in the art that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. NOTATION AND NOMENCLATURE SECTION Some regions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “testing,” “communicating,” “coupling,” “converting,” “relaying,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The description below provides a discussion of computers and other devices that may include one or more modules. As used herein, the term “module” or “block” may be understood to refer to software, firmware, hardware, and/or various combinations thereof. It is noted that the blocks and modules are exemplary. The blocks or modules may be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed at a particular module or block may be performed at one or more other modules or blocks and/or by one or more other devices instead of or in addition to the function performed at the described particular module or block. Further, the modules or blocks may be implemented across multiple devices and/or other components local or remote to one another.
Additionally, the modules or blocks may be moved from one device and added to another device, and/or may be included in both devices. Any software implementations of the present invention may be tangibly embodied in one or more storage media, such as, for example, a memory device, a floppy disk, a compact disk (CD), a digital versatile disk (DVD), or other devices that may store computer code. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present invention. As used throughout this disclosure, the singular forms “a,” “an,” and “the” include plural reference unless the context clearly dictates otherwise. Thus, for example, a reference to “a module” includes a plurality of such modules, as well as a single module, and equivalents thereof known to those skilled in the art. Device Interface Board Supporting Devices with Multiple Different Standards to Interface with the Same Socket Test throughput can usually be improved in a number of ways. One way to decrease the testing time of DUTs is by transferring functionality formerly performed in software on a general-purpose tester processor to hardware accelerators implemented on FPGA devices. Another way is by increasing the number and types of devices under test (DUTs) that can be tested under prevailing hardware and time constraints, for example, by configuring the hardware so that DUTs supporting many different interface standards, e.g., U.2, U.3, etc., can be tested with the same hardware (e.g., using the same connectors on a device interface board (DIB)) without needing to replace or reconfigure any hardware components. Embodiments of the present invention are directed to improving test efficiency in the hardware of the automatic test equipment in these ways. FIG. 2 is an exemplary high level block diagram of the automatic test equipment (ATE) apparatus 200 in which a tester processor is connected to the devices under test (DUTs) through FPGA devices with built-in functional modules in accordance with an embodiment of the present invention. In one embodiment, ATE apparatus 200 may be implemented within any testing system capable of testing multiple DUTs simultaneously. For example, in one embodiment, apparatus 200 may be implemented inside a primitive as shown in FIG. 10. Referring to FIG. 2, an ATE apparatus 200 for testing semiconductor devices more efficiently in accordance with an embodiment of the present invention includes a system controller 201, a network switch 202 connecting the system controller to the site module boards 230A-230N, FPGA devices 211A-211M comprising instantiated FPGA tester blocks 210A-210N, memory block modules 240A-240M wherein each of the memory blocks is connected to one of the FPGA devices 211A-211M, and the devices under test (DUTs) 220A-220N, wherein each device under test 220A-220N is connected to one of the instantiated FPGA tester blocks 210A-210N. In one embodiment, the system controller 201 may be a computer system, e.g., a personal computer (PC), that provides a user interface for the user of the ATE to load the test programs and run tests for the DUTs connected to the ATE 200. The Verigy Stylus Operating System is one example of test software normally used during device testing. It provides the user with a graphical user interface from which to configure and control the tests.
It can also comprise functionality to control the test flow, control the status of the test program, determine which test program is running, and log test results and other data related to test flow. In one embodiment, the system controller can be connected to and control as many as 512 DUTs. In one embodiment, the system controller 201 can be connected to the site module boards 230A-230N through a network switch, such as an Ethernet switch. In other embodiments, the network switch may be compatible with a different protocol such as Fibre Channel, 802.11 or ATM, for instance. In one embodiment, each of the site module boards 230A-230N may be a separate standalone board used for purposes of evaluation and development that attaches to custom-built load board fixtures, on which the DUTs 220A-220N are loaded, and also to the system controller 201 from where the test programs are received. In other embodiments, the site module boards may be implemented as plug-in expansion cards or as daughter boards that plug into the chassis of the system controller 201 directly. Alternatively, the site module boards may be housed within an enclosure of a primitive (as shown in FIG. 10) and may connect to the various DUTs using a device interface board (DIB). In one implementation, the site module boards 230A-230N can each comprise at least one tester processor 204 and at least one FPGA device. The tester processor 204 and the FPGA devices 211A-211M on the site module board run the test methods for each test case in accordance with the test program instructions received from the system controller 201. In one embodiment, the tester processor can be a commercially available Intel 8086 CPU or any other well-known processor. Further, the tester processor may be operating on the Ubuntu OS x64 operating system and running the Core Software, which allows it to communicate with the Stylus software running on the system controller, to run the test methods. The tester processor 204 controls the FPGA devices on the site module and the DUTs connected to the site module based on the test program received from the system controller. The tester processor 204 is connected to and can communicate with the FPGA devices over bus 212. In one embodiment, tester processor 204 communicates with each of the FPGA devices 211A-211M over a separate dedicated bus. In one embodiment, tester processor 204 can control the testing of the DUTs 220A-220N transparently through the FPGAs with minimal processing functionality allocated to the FPGA devices. In this implementation, the FPGA devices act as pass-through devices. In this embodiment, the data traffic capacity of bus 212 can be exhausted rapidly because all the commands and data generated by the tester processor need to be communicated over the bus to the FPGA devices. In other embodiments, the tester processor 204 can share the processing load by allocating functionality to control the testing of the DUTs to the FPGA devices. In these embodiments, the traffic over bus 212 is reduced because the FPGA devices can generate their own commands and data. In one embodiment, each of the FPGA devices 211A-211M is connected to its own dedicated memory block 240A-240M. These memory blocks can, among other things, be utilized to store the test pattern data that is written out to the DUTs.
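The bandwidth argument above, that bus 212 saturates in pass-through mode but not when generation is offloaded, can be made concrete with a rough model. Everything in the following sketch, including the byte counts and the notion of a fixed-size test descriptor, is an illustrative assumption rather than a figure from the patent.

```python
# Rough, hypothetical model of why offloading command and pattern
# generation to the FPGAs reduces traffic on bus 212. All numbers are
# illustrative assumptions.

def bus_bytes(n_duts, cmds_per_dut, bytes_per_cmd, pattern_bytes, offloaded):
    """Bytes the tester processor must push over the bus per test run.

    In pass-through mode everything (commands plus pattern data) crosses
    the bus. When generation is offloaded, only compact descriptors cross
    the bus and the FPGAs synthesize commands and pattern data locally.
    """
    if offloaded:
        descriptor_bytes = 64  # assumed size of a per-DUT test descriptor
        return n_duts * descriptor_bytes
    return n_duts * cmds_per_dut * (bytes_per_cmd + pattern_bytes)

print(bus_bytes(16, 1000, 32, 4096, offloaded=False))  # pass-through mode
print(bus_bytes(16, 1000, 32, 4096, offloaded=True))   # accelerated mode
```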
In one embodiment, each of the FPGA devices can comprise two instantiated FPGA tester blocks 210A-210B with functional modules for performing functions including implementation of communicative protocol engines and hardware accelerators as described further herein. Memory blocks 240A-240M can each contain one or more memory modules, wherein each memory module within the memory block can be dedicated to one or more of the instantiated FPGA tester blocks 210A-210B. Accordingly, each of the instantiated FPGA tester blocks 210A-210B can be connected to its own dedicated memory module within memory block 240A. In another embodiment, instantiated FPGA tester blocks 210A and 210B can share one of the memory modules within memory block 240A. Further, each of the DUTs 220A-220N in the system can be connected to a dedicated instantiated FPGA tester block 210A-210N in a “tester per DUT” configuration, wherein each DUT gets its own tester block. This allows separate test execution for each DUT. The hardware resources in such a configuration are designed in a manner to support individual DUTs with minimal hardware sharing. This configuration also allows many DUTs to be tested in parallel, where each DUT can be connected to its own dedicated FPGA tester block and be running a different test program. In one implementation, two or more DUTs may also be connected to each FPGA tester block (e.g., block 210A). The architecture of the embodiment of the present invention depicted in FIG. 2 has several advantages. First, it allows the communication protocol modules to be programmed directly on the instantiated FPGA tester blocks within the FPGA devices. The instantiated tester blocks can be configured to communicate with the DUTs in any protocols that the DUTs support. Accordingly, if DUTs with different protocol support need to be tested, they can be connected to the same system and the FPGAs can be reprogrammed with support for the associated protocols. As a result, one ATE body can be easily configured to test DUTs supporting many different types of protocols. In one embodiment, new protocols can be downloaded and installed directly on the FPGAs via a simple bit-stream download from a cache on system controller 201 without any kind of hardware interactions. An FPGA will typically include a configurable interface core (or IP core) that is programmable to provide functionality of one or more protocol based interfaces for a DUT and is programmable to interface with the DUT. For example, the FPGAs 211A-211M in the ATE apparatus 200 will include an interface core that can be configured with the PCIe protocol to test PCIe devices initially and subsequently reconfigured via a software download to test SATA devices. Also, if a new protocol is released, the FPGAs can easily be configured with that protocol via a bit-stream download. Finally, if a non-standard protocol needs to be implemented, the FPGAs can nonetheless be configured to implement such a protocol. In another embodiment, the FPGAs 211A-211M can be configured to run more than one communicative protocol, wherein these protocols also can be downloaded from system controller 201 and configured through software. In other words, each FPGA implements custom firmware and software images to implement the functionality of one or more PC based testers in a single chip. The required electrical signaling and protocol-based signaling is provided by on-chip IP cores in the FPGAs. As mentioned above, each FPGA is programmable with pre-verified interface or IP cores.
This ensures compliance and compatibility according to a given interface standard. The programmable nature of the FPGA is utilized to optimize flexibility, cost, parallelism and upgradeability for storage testing applications from SSDs, HDDs and other protocol based storage devices. For instance, instantiated FPGA tester block 210A can be configured to run the PCIe protocol while instantiated FPGA tester block 210B can be configured to run the SATA protocol. This allows the tester hardware to test DUTs supporting different protocols simultaneously. FPGA 211A can now be connected to test a DUT that supports both PCIe and SATA protocols. Alternatively, it can be connected to test two different DUTs, one DUT supporting the PCIe protocol and the other DUT supporting the SATA protocol, where each instantiated functional module (e.g., 210A, 210B) is configured with a protocol to test the respective DUT it is connected to. In one embodiment, the interface or IP core in the FPGA may be acquired from a third party vendor but may require some customization to be compatible with the embodiments described herein. In one embodiment, the interface core provides two functions: 1) it wraps storage commands into a standard protocol for transmission over a physical channel; and 2) it is the electrical signal generator and receiver. The other major advantage of the architecture presented in FIG. 2 is that it reduces processing load on the tester processor 204 by distributing the command and test pattern generating functionality to FPGA devices, where each DUT has a dedicated FPGA module running the test program specific to it. For instance, instantiated FPGA tester block 210A is connected to DUT 220A and runs test programs specific to DUT 220A. The hardware resources in such a configuration are designed in a manner to support individual DUTs with minimal hardware sharing. This “tester per DUT” configuration also allows more DUTs to be tested per processor and more DUTs to be tested in parallel. Furthermore, with the FPGAs capable of generating their own commands and test patterns in certain modes, the bandwidth requirements on bus 212 connecting the tester processor with the other hardware components, including FPGA devices, device power supplies (DPS) and DUTs, are also reduced. As a result, more DUTs can be tested simultaneously than in prior configurations. FIG. 3 provides a more detailed schematic block diagram of the site module and its interconnections with the system controller and the DUTs that connect to sockets on a device interface board (DIB) in accordance with an embodiment of the present invention. Referring to FIG. 3, the site modules of the ATE apparatus, in one embodiment, can be mechanically configured onto tester slices 340A-340N, wherein each tester slice comprises at least one site module. In certain typical embodiments, each tester slice can comprise two site modules and two device power supply boards. Tester slice 340A of FIG. 3, for example, comprises site modules 310A and 310B and device power supply boards 332A and 332B. However, there is no limit to the number of device power supply boards or site modules that can be configured onto a tester slice. Tester slice 340 is connected to system controller 301 through network switch 302. System controller 301 and network switch 302 perform the same functions as elements 201 and 202 in FIG. 2, respectively. Network switch 302 can be connected to each of the site modules with a 32 bit wide bus. Each of the device power supply boards 332A-332B can be controlled from one of the site modules 310A-310B.
The software running on the tester processor 304 can be configured to assign a device power supply to a particular site module. In one embodiment, the site modules 310A-310B and the device power supplies 332A-332B are configured to communicate with each other using a high speed serial protocol, e.g., Peripheral Component Interconnect Express (PCIe), Serial AT Attachment (SATA) or Serial Attached SCSI (SAS), for instance. In one embodiment, each site module is configured with two FPGAs as shown in FIG. 3. Each of the FPGAs 316 and 318 in the embodiment of FIG. 3 is controlled by the tester processor 304 and performs a similar function to FPGAs 211A-211M in FIG. 2. The tester processor 304 can communicate with each of the FPGAs using an 8-lane high speed serial protocol interface such as PCIe, as indicated by system buses 330 and 331 in FIG. 3. In other embodiments, the tester processor 304 could also communicate with the FPGAs using different high speed serial protocols, e.g., Serial AT Attachment (SATA) or Serial Attached SCSI (SAS). FPGAs 316 and 318 are connected to memory modules 308 and 305, respectively, where the memory modules perform a similar function to memory blocks 240A-240N in FIG. 2. The memory modules are coupled with and can be controlled by both the FPGA devices and the tester processor 304. In one embodiment, the DUTs 372A-372M derive power from the device power supplies 332A and 332B. FPGAs 316 and 318 can be connected to the DUTs 372A-372M using connector modules 373A-373N on a DIB 390 through lanes 352 and 354, respectively. The DIB comprises connector modules 373A-373N that enable the DUTs to interface with the FPGAs on the tester slices. As will be explained in connection with FIG. 4 and FIG. 11, the connector modules may comprise sockets, lane change modules, multiplexers and other logic circuitry. In one embodiment, the connector modules enable DUTs supporting different computer interface standards (e.g., U.2 and U.3 solid-state drives) to connect with the same socket on the DIB 390. In other words, the additional circuitry on the connector modules is configured so that DUTs associated with different interfaces may be able to use the same hardware to connect to the DIB 390 and interface with the site module boards (including the FPGA). For example, U.2 (also known as SFF-8639) is a computer interface standard for connecting solid-state drives (SSDs) to a computer (e.g., a tester system). It was designed to be used with PCIe drives along with SAS and SATA drives. It uses up to four PCIe lanes and two SATA lanes. A U.2 SSD is a high-performance data storage device designed to support the Peripheral Component Interconnect Express (PCIe) interface using a small form factor (SFF) connector that is also compatible with standard SAS and SATA-based spinning disks and solid-state drives (SSDs). The U.3 standard builds on the U.2 standard, but comprises a different pinout than the U.2 standard. For example, it combines SAS, SATA and NVMe support into a single controller. DUTs that support the U.2 and the U.3 standard may have the same form factor, but the pinouts may be quite different. In one embodiment, connector modules 373A-373N enable DUTs to advantageously connect to the tester without needing to change or re-configure the hardware. In other words, both U.2 and U.3 DUTs may be plugged into the sockets on DIB 390 (without requiring additional hardware bus adapter cards or other modifications to the hardware).
The circuitry on the connector modules 373A-373N, combined with the firmware and software support on the FPGAs (e.g., FPGAs 316 and 318), allows both U.2 and U.3 DUTs to be connected to the same socket on the DIB 390 and communicate with the tester system without hardware reconfiguration. The number of DUTs that can be connected to each FPGA is contingent on the number of transceivers in the FPGA and the number of I/O lanes required by each DUT. In one embodiment, FPGAs 316 and 318 can each comprise 32 high speed transceivers and lanes 352 and 354 can each be 32 bits wide; however, more or fewer can be implemented depending on the application. If each DUT requires 8 I/O lanes, for example, only 4 DUTs can be connected to each FPGA in such a system. FIG. 4 is a detailed schematic block diagram of an instantiated FPGA tester block of FIG. 2 according to an embodiment of the present invention. Referring to FIG. 4, the instantiated FPGA tester block 410 is connected to the tester processor 499 through PCIe upstream port 470 and to the DUT through PCIe downstream port 480. Instantiated FPGA block 410 can comprise a protocol engine module 430, a logic block module 450, and a hardware accelerator block 440. The hardware accelerator block 440 can further comprise a memory control module 444, comparator module 446, a packet builder module 445, and an algorithmic pattern generator (APG) module 443. In one embodiment, logic block module 450 comprises decode logic to decode the commands from the tester processor 499, routing logic to route all the incoming commands and data from the tester processor 499 and the data generated by the FPGA devices to the appropriate modules, and arbitration logic to arbitrate between the various communication paths within instantiated FPGA tester block 410. In one implementation, the communication protocol used to communicate between the tester processor 499 and the DUTs can advantageously be reconfigurable. The communicative protocol engine in such an implementation is programmed directly into the protocol engine module 430 of instantiated FPGA tester block 410. The instantiated FPGA tester block 410 can therefore be configured to communicate with the DUTs in any protocol that the DUTs support. The pre-verified interface or IP cores mentioned above, for example, can be programmed into the protocol engine module 430. This ensures compliance and compatibility according to a given interface standard. Further, the IP core allows the tester to achieve flexibility in that the IP core enables software-based changing of interfaces. Embodiments provide an ability to test multiple types of DUTs independent of the hardware. With such interface flexibility, new interfaces may be loaded into the IP core of a programmable chip. In one embodiment, an FPGA may be an SSD module-based tester that uses protocol-based communications to interface with a DUT or module. In one embodiment, the configurable interface core may be programmed to provide any standardized protocol-based communications interface. For example, in one embodiment, in the case of an SSD module-based test, the interface core may be programmed to provide standardized protocol-based communications interfaces such as SATA, SAS, etc. Accordingly, from an electrical perspective, the FPGAs utilize a configurable IP core. Enabled by software programming of the programmable chip resources of an FPGA, a given IP core may be easily reprogrammed and replaced with another IP core without physically swapping out the FPGA chip or other hardware components.
For example, if a given FPGA-based tester currently supports SATA, all that would be required to be able to connect to a fibre channel DUT is for the FPGA to be reprogrammed to use a fibre channel IP core instead of the existing IP core configured for SATA. In one embodiment, the protocols can be high speed serial protocols, including but not limited to SATA, SAS or PCIe, etc. The new or modified protocols can be downloaded and installed directly on the FPGAs via a simple bit-stream download from the system controller through the tester processor. Also, if a new protocol is released, the FPGAs can easily be re-configured with that protocol via a software download. In FIG. 4, if the DUT 493 is a PCIe device, a bit-file containing the instantiation of the PCIe protocol can be downloaded through the PCIe upstream port 470 and installed in the IP core on the protocol engine module 430. Each FPGA device 316 or 318 can comprise one or more instantiated FPGA tester blocks and, consequently, one or more protocol engine modules. The number of protocol engine modules that any one FPGA device can support is limited only by the size and gate count of the FPGA. In one embodiment of the present invention, each of the protocol engine modules within a FPGA device can be configured with a different communicative protocol. Accordingly, an FPGA device can be connected to test multiple DUTs, each supporting a different communicative protocol, simultaneously. Alternatively, an FPGA device can be connected to a single DUT supporting multiple protocols and test all the modules running on the device simultaneously. For example, if an FPGA is configured to run both PCIe and SATA protocols, it can be connected to test a DUT that supports both PCIe and SATA protocols. Alternatively, it can be connected to test two different DUTs, one DUT supporting the PCIe protocol and the other DUT supporting the SATA protocol. The hardware accelerator block 440 of FIG. 4 can be used to perform certain functions in FPGA hardware faster than would be possible in software on the tester processor. The hardware accelerator block 440 can supply the initial test pattern data used in testing the DUTs. It can also contain functionality to generate certain commands used to control the testing of the DUTs. To generate test pattern data, accelerator block 440 uses the algorithmic pattern generator module 443. The hardware accelerator block 440 can use comparator module 446 to compare the data being read from the DUTs to the data that was written to the DUTs in a prior cycle. The comparator module 446 comprises functionality to flag a mismatch to the tester processor 304 to identify devices that are not in compliance. More specifically, the comparator module 446 can comprise an error counter that tracks the mismatches and communicates them to the tester processor 304. Hardware accelerator block 440 can connect to a local memory module 420. Memory module 420 performs a similar function to a memory module within any of the memory blocks 240A-240M. Memory module 420 can be controlled by both the hardware accelerator block 440 and the tester processor 304. The tester processor 304 can control the local memory module 420 and write the initial test pattern data to it. The memory module 420 stores the test pattern data to be written to the DUTs, and the hardware accelerator block 440 accesses it to compare the data stored to the data read from the DUTs after the write cycle. The local memory module 420 can also be used to log failures.
The memory module would store a log file with a record of all the failures the DUTs experienced during testing. In one embodiment, the accelerator block 440 has a dedicated local memory module block 420 that is not accessible by any other instantiated FPGA tester blocks. In another embodiment, the local memory module block 420 is shared with a hardware accelerator block in another instantiated FPGA tester block. Hardware accelerator block 440 can also comprise a memory control module 444. The memory control module 444 interacts with and controls read and write access to the memory module 420. Finally, hardware accelerator block 440 comprises a packet builder module 445. The packet builder module is used by the hardware accelerator block in certain modes to construct packets, comprising header/command data and test pattern data, to be written out to the DUTs. In certain embodiments, hardware accelerator block 440 can be programmed by the tester processor 304 to operate in one of several modes of hardware acceleration. In bypass mode, the hardware accelerator is bypassed and commands and test data are sent by the tester processor 304 directly to the DUT through path 472. In hardware accelerator pattern generator mode, test pattern data is generated by the APG module 443 while the commands are generated by the tester processor 304. The test packets are transmitted to the DUT through path 474. In hardware accelerator memory mode, the test pattern data is accessed from local memory module 420 while the commands are generated by the tester processor 304. The test pattern data is transmitted to the DUT through path 476. In full accelerator mode, the FPGA tester block 410 generates both the commands and the data for testing the DUT 493. In one embodiment, routing logic module 482 comprises the logic circuitry to swap lanes 473 so that DUTs supporting different interface standards (e.g., U.2, U.3, etc.) can be tested using the same hardware. The routing logic module 482 is implemented on the instantiated FPGA tester block 410 and works in conjunction with the logic circuitry on DIB connector module 492 (implemented on DIB 493) to advantageously reroute the lanes 473 so that both U.2 and U.3 DUTs can be connected to the same hardware. As will be explained further in connection with FIG. 11, DIB connector module 492 comprises multiplexers, lane change modules, and additional circuitry that enables pins on the connector associated with the connector module 492 to be re-routed so that both U.2 and U.3 type DUTs can be supported. In one embodiment, routing logic 482 may comprise a “lane swizzle” module 497 and a “lane mask” module 498 that enable the lane remapping between, for example, U.2 and U.3 type devices, as will be explained further in connection with FIG. 6. Further, in one embodiment, API modules 494 may be implemented on the tester processor 499 that control the switches (e.g., multiplexers, lane change modules, etc.) on the DIB connector module 492 and also control the firmware modules 497 and 498 implemented in the routing logic module 482 within the FPGA. Routing logic 482 is needed to arbitrate between paths 472, 474 and 476 to control the flow of data to the DUT. As noted above, routing logic 482 can also be used to perform the mapping between the lanes 473 so that DUTs supporting different interface standards can be used with the connector module 492. FIG. 5 illustrates a primitive 510 interfaced with a Device Interface Board (DIB) 500 in accordance with an embodiment of the invention. In one embodiment, primitive 510 may be connected to and used to test primarily SSD drives.
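The four acceleration modes described above for FIG. 4 reduce to a small dispatch: which side generates the commands, which side supplies the pattern data, and which path carries the traffic. The following Python sketch summarizes that mapping; the function and field names are descriptive choices, not identifiers from the patent.

```python
# Sketch of the four acceleration modes described for FIG. 4. The mode
# semantics follow the text; the dispatch structure itself is illustrative.
from enum import Enum

class AccelMode(Enum):
    BYPASS = "bypass"            # processor sends commands and data (path 472)
    PATTERN_GENERATOR = "apg"    # APG 443 makes data, processor makes commands (474)
    MEMORY = "memory"            # data read from local memory module 420 (476)
    FULL = "full"                # FPGA tester block generates commands and data

def plan(mode: AccelMode) -> dict:
    """Who generates commands and pattern data in a given mode."""
    commands = "fpga" if mode is AccelMode.FULL else "tester_processor"
    data = {
        AccelMode.BYPASS: "tester_processor",
        AccelMode.PATTERN_GENERATOR: "apg_module",
        AccelMode.MEMORY: "local_memory",
        AccelMode.FULL: "fpga",
    }[mode]
    return {"commands_from": commands, "data_from": data}

print(plan(AccelMode.MEMORY))  # {'commands_from': 'tester_processor', 'data_from': 'local_memory'}
```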
Similar to the tester slice (e.g., 340A, etc.) shown in FIG. 3, the primitive of FIG. 5 is a type of discrete test module that fits into a test head and comprises the test circuitry, which performs tests on the DUTs in accordance with a test plan. A primitive comprises an enclosure 550 within which all the various electronics, e.g., site modules, power supplies, etc., are housed. The DIB 500 can connect with a plurality of DUTs 520 using sockets sized for the DUTs 520. The DUTs connect to sockets within the DIB 500 to physically and electronically interface to the DIB 500. Conventional DIBs typically do not contain sockets that allow swapping of DUTs supporting different interface standards. Embodiments of the present invention provide connector modules that advantageously allow DUTs supporting different interface standards (e.g., U.2 and U.3 devices) to be swapped out without reconfiguring the hardware. The primitive can also comprise an enclosure 570. The DIB 500 can, in one embodiment, interface to a universal backplane (not shown) of the primitive 510 through a load board (not shown). The primitive 510 contains test circuitry for performing a test plan on the DUTs 520. The primitive 510 can operate independently of any other primitive and is connected to a control server (similar to system controller 301 shown in FIG. 3). FIG. 6 illustrates the manner in which a device interface board (DIB) can be configured so that devices supporting multiple different standards can interface with the same socket on the DIB in accordance with an embodiment of the invention. Specifically, FIG. 6 illustrates the manner in which a connector module 625 (which performs substantially the same function as connector module 373A in FIG. 3 and connector module 492 in FIG. 4) on a DIB is configured to allow both U.2 and U.3 devices to connect to a socket on the DIB and interface with the tester system seamlessly without requiring any hardware modifications. As mentioned previously, in prior ATE systems, if the pin out of the DUT was different, the interface board or load board through which communication would take place between a tester system and a DUT would need to be swapped out and replaced with a different hardware design. For example, U.3 DUTs would require a unique device interface board (DIB) as compared to U.2 DUTs, even though the form factor of the DUTs is the same, because U.3 DUTs differ in pinout configuration from U.2 DUTs. Embodiments of the present invention implement a connector module 625 on a device interface board (DIB) that supports testing both U.3 and U.2 DUTs without requiring any change in hardware. The DIB includes a universal socket 620 into which DUTs implementing different interface standards (e.g., U.2, U.3, etc.) may be plugged. The socket 620 comprises pins A, B, C, D, E and F that can be re-mapped to different lanes (e.g., lanes 0, 1, 2, 3 and 4) depending on the type of device connected to the socket 620. Note that lanes 0, 1, 2, 3 and 4 may be comprised within lanes 473 shown in FIG. 4. The connector module 625, in one embodiment, enables on-the-fly pin-out reconfiguration. In this fashion, the same socket on the same DIB can be advantageously reused to test different types of DUTs. Although the same socket 620 is used, the DUT pin-out can be re-mapped so that a single DIB can be used to test different types of DUTs, e.g., U.3 or U.2 DUT types. Therefore, two different device types can be operated on the same DIB using the same hardware.
Further, the pin-out can be advantageously configured by user selection during run-time, since no hardware changes are required. On the hardware side in the DIB, the hardware allows signals to be re-routed to advantageously support the different pin-outs between the different interface standard DUTs used. In one embodiment, the connector module 625 comprises at least a multiplexer A 622, a multiplexer B 624, a lane exchange module 623 and a connector 620. In combination, the firmware logic on the FPGA (e.g., firmware logic implemented on routing logic 482 on FPGA tester block 410 in FIG. 4), the software implemented on the tester processor (e.g., APIs 494 implemented on tester processor 499), the multiplexers (e.g., multiplexers 622 and 624) and the lane exchange module 623 together are able to re-route lanes (e.g., lane 1, lane 2, lane 3, and lane 0) that comprise the interface between the connector 620 and the tester system so that both U.2 and U.3 type DUTs can be tested using the same connector. In other words, the connector module 625 allows lanes to be re-mapped so that signals are directed to the appropriate pins on the connector 620 depending on the pinout of the respective DUT plugged into the connector 620. In one embodiment, the connector module 625 is configured to advantageously use the fewest multiplexers needed to perform the mapping. Table 621 illustrates the manner in which the pins on connector 620 may be mapped based on the type of device in accordance with an embodiment of the present invention. The columns of Table 621 correspond to each of the pins A, B, C, D, E, and F on the connector 620. The rows correspond to the interface standards that may be implemented using the connector 620, e.g., U.2 single port, U.3 single port, U.2 Port A, U.2 Port B, U.3 Port A and U.3 Port B. U.2 Port A and U.2 Port B implement the dual port U.2 standard. U.3 Port A and U.3 Port B implement the dual port U.3 standard. Row 631 in Table 621 indicates the physical module (e.g., multiplexer A 622, multiplexer B 624, or lane exchange module 623) to which a corresponding pin is connected. Each cell in Table 621 indicates the manner in which the corresponding pin needs to be mapped in order to implement the corresponding standard. For example, to implement the U.2 standard, lane 1 is internally mapped to lane 3 in connection with Pin B and lane 3 is internally mapped to lane 1 in connection with Pin E. Embodiments of the present invention are able to use the mapping shown in Table 621 to map between the different pinouts of U.2 and U.3 type devices. FIG. 7 illustrates the manner in which pinouts for U.2 and U.3 devices can be mapped to each other in accordance with an embodiment of the invention. As shown in FIG. 7, the four lanes (lane 0, lane 1, lane 2 and lane 3) are physically connected to the appropriate pins for either the U.2 or U.3 specification on the DIB. The pins on socket 720 to which the lanes (lane 0 702, lane 1 704, lane 2 708 and lane 3 706) connect for a U.3 device are different from the pins on the socket 721 to which the same lanes (lane 0 732, lane 1 738, lane 2 734 and lane 3 736) connect for a U.2 device. To support both standards, all four lanes will typically need to be remapped. In order to convert a U.3 connection into a U.2 connection, Lane 0 702 needs to be moved to U.2's Lane 0 732 pins. Further, Lane 1 704 needs to be moved to U.2's Lane 1 738 pins. Lane 2 708 needs to be moved to U.2's Lane 2 734 pins. Also, Lane 3 706 needs to be moved to U.2's Lane 3 736 pins.
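The four lane moves just listed are a fixed remapping of tester lanes onto standard-specific pin groups, in the spirit of the "lane swizzle" and "lane mask" firmware modules mentioned in connection with FIG. 4. The sketch below reuses the figure numerals as pin-group labels; the masking example is a hypothetical illustration of how a dual-port pinout might disable unused lanes, since those details are not spelled out here.

```python
# Sketch of the U.3-to-U.2 conversion described above. Pin-group labels
# reuse the figure numerals (702/704/706/708 for U.3, 732/734/736/738
# for U.2); the masking example is an illustrative assumption.

U3_PINS = {0: "702", 1: "704", 2: "708", 3: "706"}   # lane -> U.3 pin group
U2_PINS = {0: "732", 1: "738", 2: "734", 3: "736"}   # lane -> U.2 pin group

def swizzle(standard: str, lane: int) -> str:
    """Route a tester lane to the pin group required by the DUT standard."""
    pins = U2_PINS if standard == "U.2" else U3_PINS
    return pins[lane]

def mask(lanes: list, unused: set) -> list:
    """Disable lanes a particular pinout (e.g., a dual-port mode) leaves unused."""
    return [None if lane in unused else lane for lane in lanes]

for lane in range(4):  # all four lanes are remapped, as noted above
    print(f"lane {lane}: U.3 pins {swizzle('U.3', lane)} -> U.2 pins {swizzle('U.2', lane)}")
print(mask([0, 1, 2, 3], unused={2, 3}))  # hypothetical dual-port masking
```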
As seen in FIG. 7, with a U.2 DIB, there are no physical connections on the connector corresponding to the pins where Lane 0 and Lane 1 connect for a U.3 device. Also, with a U.3 DIB, there are no physical connections on the connector corresponding to the pins where Lane 0 and Lane 3 connect for a U.2 device. As discussed in connection with FIG. 4, the FPGA comprises command and control logic (e.g., modules 497 and 498) configured to test the one or more DUTs. The FPGA, in one embodiment, enables reconfiguration by rerouting signals inside the FPGA. The FPGA signal re-routing performed in firmware, in combination with the DIB signal rerouting on connector module 492, enables embodiments of the present invention to operate. As noted above, firmware logic is implemented on the routing logic module 482 of FIG. 4 so that signals can be re-routed within the FPGA and data comes out on the appropriate channels to accommodate the different pinouts of the different types of DUTs. On the software side, APIs 494 are implemented on tester processor 499 to control this functionality by switching the high-speed signals and sideband signals through the FPGA firmware and the DIB. In one embodiment, routing logic 482 may comprise a “lane swizzle” module 497 and a “lane mask” module 498 that enable the lane remapping between, for example, U.2 and U.3 type devices. The lane swizzle module 497 logically rewires the tester lanes within the FPGA firmware to match the requirements on the connectors (e.g., connector 620 in FIG. 6). The lane mask module 498 allows lanes that are unused for a particular pinout to be masked. As seen in Table 621 of FIG. 6, certain pins corresponding to dual port configurations are masked. The “lane mask” module allows the firmware to control the masking of the associated channels. In one embodiment, the APIs 494 implemented on tester processor 499 comprise an API to control the multiplexers and lane exchange modules on the connector module 492. Further, the APIs 494 also comprise a respective API to control each of the lane swizzle module 497 and the lane mask module 498. FIG. 8 depicts a flowchart of an exemplary process of testing DUTs supporting different interfaces using the same socket in a tester system according to an embodiment of the present invention. The embodiments of the invention, however, are not limited to the description provided by flowchart 800. Rather, it will be apparent to persons skilled in the relevant art(s) from the teachings provided herein that other functional flows are within the scope and spirit of the present invention. At block 810, a system controller is coupled to a tester processor and an FPGA. The system controller may run a Windows-based operating system, as discussed above. The tester processor and FPGA may be located on the same tester board or different boards. The FPGA is communicatively coupled to the tester processor and is operable to generate commands and data for testing a plurality of DUTs in accordance with one of the various acceleration modes discussed above. At block 812, commands and data for testing a DUT are generated by the tester processor and/or the FPGA. At block 814, signals associated with the commands and data are re-routed in firmware implemented on the FPGA based on the type of the DUT (e.g., a U.2 or a U.3 type DUT) that is coupled to the FPGA.
At block 816, the re-routed signals are transmitted selectively over lanes corresponding to a particular set of pins on the DUT, wherein the particular set of pins receiving the selectively transmitted signals depends on the type of DUT. The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as may be suited to the particular use contemplated.
11860230
DETAILED DESCRIPTION Different embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments are shown. Many different forms can be set forth, and the described embodiments should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope to those skilled in the art. Referring now to FIG. 1, there is illustrated generally at 100 an electrical switchgear system in accordance with a non-limiting example that includes a front switchgear section 102 having first and second sets of front upper and lower switchgear housings 104, 106, 108, 110 and having joined sidewalls. A rear switchgear section 114 includes first and second sets of rear upper and lower switchgear housings, with three housings 116, 120, 122 being illustrated, having joined sidewalls and connected to the rear of the respective front upper and lower switchgear housings 104, 106, 108, 110. Joined sidewalls of the first and second sets of front upper and lower switchgear housings 104, 106, 108, 110, 116, 120, 122 include a stepped offset section to form a ventilation duct 134 extending the height of the switchgear system 100. Each illustrated switchgear housing 104, 106, 108, 110, 116, 120, 122 includes a switchgear frame 124 (FIGS. 2-4) that defines an interior compartment 128 (FIG. 4). It is possible that the front and rear switchgear sections 102, 114 may include “n” sets of both front and rear upper and lower switchgear housings and form a series of switchgear housing sections forming the electrical switchgear system 100. In an example, the left front upper switchgear housing 104 may include, within the interior compartment 128, upper and lower compartments, where each of the upper and lower compartments may include the front opening defined at the front of the switchgear housing 104 and a truck and drive mechanism. The front left lower switchgear housing 106 in this example may include a circuit breaker truck 150 and circuit breaker drive mechanism 152, as explained below in more detail with reference to the description of FIG. 4. The front switchgear section 102 upper and lower switchgear housings 104, 106, 108, 110 and the rear switchgear section 114 having the upper and illustrated lower switchgear housings 116, 120, 122 each may include one or more interior compartments 128 (FIGS. 2, 4 and 5) and various electrical switchgear components. On the outside of the electrical switchgear system 100, and more particularly, on the outer side of the rear housings 120, 122 as shown in FIG. 1, there are shown components that make up part of a main bus extension assembly and phased shorting bus 156 that may extend from a main bus compartment. The rear switchgear section 114 may include, in the various interior compartments of the illustrated switchgear housings 116, 120, 122, a main bus assembly, a ground bus assembly interconnect, a potential transformer (PT) and control power transformer (CPT) jump bus assembly, a line bus assembly, a cable compartment, various bus bars and other associated electric components. The front section upper and lower switchgear housings 104, 106, 108, 110 include doors 104a, 106a, 108a, 110a for each switchgear housing to permit access into each interior compartment 128. Referring now to FIGS. 2-4, the switchgear system 100 is illustrated as having a switchgear frame 124 with an interior compartment 128.
A circuit breaker truck 150 carries the circuit breaker 250 and is supported for movement on the switchgear frame 124 within the interior compartment 128 into a contact testing position, such as illustrated at 160, where electrical contact erosion may be determined within the circuit breaker. The circuit breaker 250 includes a breaker housing 164, which in this example is formed as a vacuum interrupter 270 (FIG. 4). As best shown in FIG. 5, the breaker housing 164 supports a fixed electrical contact 168 and a movable electrical contact 170, both mounted within the breaker housing 164. The movable electrical contact 170 is movable between an open and a closed position relative to the fixed electrical contact 168. An actuator piston 174 is connected to the movable electrical contact 170 and extends downward from the breaker housing 164. A drive assembly 176 is coupled to the actuator piston 174 and configured to drive the actuator piston and move the movable electrical contact 170 between open and closed positions relative to the fixed electrical contact 168. As best shown in FIG. 3, a sensor circuit 180 is illustrated and includes first, second and third sensor circuits 180a, 180b, 180c, and is mounted on the switchgear frame 124 under the circuit breaker truck 150 and aligned with the circuit breaker 250 when in the contact testing position 160 and configured to acquire displacement data of the actuator piston 174 when the movable electrical contact 170 is moved between the open and closed positions. A controller 226 is coupled to the sensor circuit 180 and configured to receive the displacement data and determine electrical contact erosion within the circuit breaker 250. In an example, each sensor circuit 180 includes a first laser circuit 182 having a first laser 184 that is configured to emit a first optical beam as light onto a surface of the actuator piston 174. The term “actuator piston” as used herein for purposes of receiving an optical signal includes those components that are directly or indirectly connected to the movable electrical contact 170 and operate together to drive or direct the movable electrical contact into and out of engagement with the fixed electrical contact 168, and may be used for determining displacement of the actuator piston. Example components may include an actuator spring 188 and, for displacement purposes, a cylindrically configured actuator block 190 engaging the actuator spring, as best shown in the sectional views of FIGS. 5 and 6. The term “actuator piston” may also include any support plates or other support members, such as a transverse extending support plate 192 that includes a circular configured mounting member 194, as shown in the underside view of the circuit breaker 250 of FIG. 7. The transverse extending support plate 192 and its circular configured mounting member 194 engage in this example the actuator block 190 and operate in conjunction with the drive assembly 176. As shown in FIG. 7, the transverse extending support plate 192 and circular configured mounting member 194 are also connected to the threaded end 196 of the actuator piston 174. A first optical sensor 198 as a detector (D1) receives the reflected light that has been emitted as the first optical beam from a reflective surface of the actuator piston 174, which may be a surface such as the threaded end 196 of the actuator piston 174 or the actuator block 190, or part of the transverse extending support plate 192. The sensor circuit 180 further includes a second laser circuit 200 having a second laser 202 configured to emit a second optical beam onto a surface of the circuit breaker housing 164.
A second optical sensor 204 as a detector (D2) receives the reflected light from the surface of the breaker housing 164 that had been emitted as the second optical beam from the second laser 202. The controller 226 is configured to determine actual electrical contact erosion based upon the displacement of the actuator piston 174 and the circuit breaker housing 164. During an electrical short circuit or other similar abnormal electrical condition that is detected by components of the switchgear system 100, the drive assembly 176 coupled to the actuator piston 174 may aid in driving the actuator piston and move the movable electrical contact 170 into an open position relative to the fixed electrical contact 168. During that circuit breaker interrupt, not only do the actuator piston 174 and associated components move, but the circuit breaker housing 164 itself will also move slightly, in some examples a few millimeters, e.g., 1-3 millimeters, in a decaying, damped oscillation manner. Using the measured displacement of the actuator piston 174 and the circuit breaker housing 164, it is possible for the controller 226 to determine actual movement, and thus contact erosion, by subtracting the displacement of the breaker housing from the displacement of the actuator piston. The controller 226 may also be configured to recalibrate the position of the fixed electrical contact 168 and the movable electrical contact 170 based upon the displacement data obtained from movement of the actuator piston 174 and the breaker housing 164. In an example, the circuit breaker truck 150 may include a bottom panel 208 (FIG. 4) having orifices 210 aligned with the respective first and second lasers 184, 202 to allow the respective first and second optical beams emitted from the first and second lasers 184, 202 to pass upward through the orifices 210 located in the bottom panel 208 to respective surfaces of the actuator piston 174 and breaker housing 164 and be reflected therefrom to determine displacement data. As shown in FIG. 2, first, second and third circuit breakers 250a, 250b, 250c are carried on the circuit breaker truck 150, and first, second and third sensor circuits 180a, 180b, 180c (FIG. 3) are mounted on the switchgear frame 124 underneath the truck and aligned with the respective first, second and third circuit breakers when in the contact testing position 160. As shown in the schematic diagram of FIG. 3, a sensor support bar 214 supports the first, second and third sensor circuits 180a, 180b, 180c, each having first and second lasers 184, 202 and first and second optical sensors 198, 204, as best shown in FIG. 4 showing a single sensor circuit. The first, second and third circuit breakers 250a, 250b, 250c are electrically connected in a three-phase circuit breaker configuration. The drive assembly 176 that is connected to the actuator piston 174 may be configured to open the movable electrical contact 170 from the fixed electrical contact 168 in response to an abnormal electrical condition, such as a short circuit, overcurrent, or other abnormal voltage level conditions. Electrical connectors, formed in an example shown in FIG. 4 as primary circuit contacts 220a, are carried within the interior compartment 128 of the switchgear frame 124 forming the housing, and the circuit breaker 250 includes upper and lower terminals formed as contact arms 274, 276 that engage the electrical connectors as the primary circuit contacts when the circuit breaker is in an electrically connected position as shown in FIG. 4.
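The erosion determination described above is, at its core, a per-sample subtraction of the two laser channels: the housing displacement measured by detector D2 is subtracted from the piston displacement measured by detector D1, so that housing motion during an interrupt does not inflate the apparent contact travel. The following sketch illustrates that computation; the displacement values, the baseline travel, and the comparison against a baseline are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the contact erosion calculation: net contact travel is
# piston displacement minus housing displacement, per sample. All numeric
# values below are illustrative assumptions.

def contact_travel_mm(piston_disp_mm, housing_disp_mm):
    """Per-sample net contact travel from the two laser channels."""
    return [p - h for p, h in zip(piston_disp_mm, housing_disp_mm)]

piston = [0.0, 6.2, 6.3, 6.2]    # detector D1: actuator piston surface (mm)
housing = [0.0, 1.1, 0.4, 0.1]   # detector D2: breaker housing surface (mm)
net = contact_travel_mm(piston, housing)

baseline_travel_mm = 6.0          # assumed travel when the contacts were new
erosion = max(net) - baseline_travel_mm
print(f"net travel {max(net):.2f} mm, erosion estimate {erosion:.2f} mm")
```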
In an example, the circuit breaker truck 150 may include a bottom panel 208 (FIG. 4) having orifices 210 aligned with the respective first and second lasers 184, 202 to allow the respective first and second optical beams emitted from the first and second lasers 184, 202 to pass upward through the orifices 210 located in the bottom panel 208 to respective surfaces of the actuator piston 174 and breaker housing 164 and be reflected therefrom to determine displacement data. As shown in FIG. 2, first, second and third circuit breakers 250a, 250b, 250c are carried on the circuit breaker truck 150, and first, second and third sensor circuits 180a, 180b, 180c (FIG. 3) are mounted on the switchgear frame 124 underneath the truck and aligned with the respective first, second and third circuit breakers when in the contact testing position 160. As shown in the schematic diagram of FIG. 3, a sensor support bar 214 supports the first, second and third sensor circuits 180a, 180b, 180c, each having first and second lasers 184, 202 and first and second optical sensors 198, 204, as best shown in FIG. 4 showing a single sensor circuit. The first, second and third circuit breakers 250a, 250b, 250c are electrically connected in a three-phase circuit breaker configuration. The drive assembly 176 that is connected to the actuator piston 174 may be configured to open the movable electrical contact 170 from the fixed electrical contact 168 in response to an abnormal electrical condition, such as a short circuit, overcurrent, or other abnormal voltage level condition. Electrical connectors, formed in an example shown in FIG. 4 as primary circuit contacts 220a, are carried within the interior compartment 128 of the switchgear frame 124 forming the housing, and the circuit breaker 250 includes upper and lower terminals formed as contact arms 274, 276 that engage the electrical connectors as the primary circuit contacts when the circuit breaker is in an electrically connected position as shown in FIG. 4.

It should be understood that this electrically connected position may also correspond to the contact testing position 160. Of course, the contact testing position 160 may be at other positions within the switchgear frame 124 and interior compartment 128. The circuit breaker drive mechanism 152 is mounted on the switchgear frame 124, connected to the circuit breaker truck 150, and configured to rack in the truck so that the circuit breaker is in the electrically connected position as shown in FIG. 4, and to rack out the truck so that the circuit breaker is electrically disconnected. In these examples, the circuit breaker housing 164 is formed as a vacuum chamber housing, and the fixed and movable electrical contacts 168, 170 are sealed within the vacuum chamber housing.

As shown in FIG. 4, the circuit breaker truck 150 is configured for linear movement in the interior compartment 128. The circuit breaker truck 150 is supported for linear movement on the switchgear frame 124, in this example, movable on spaced, parallel side rails 230. A side rail is shown in the view of a portion of the interior compartment 128 in FIG. 4, illustrating the far section side rail 230 mounted on the interior inner side of the switchgear frame 124, on which front and rear rollers 232a, 232b may be supported for translational rolling movement along the side rails 230 of the switchgear frame 124. A side rail 230 may be mounted on each interior side of the switchgear frame 124 and positioned a few inches above any bottom floor section formed by the switchgear frame 124 and metal cladding. In the example shown in FIG. 4, the circuit breaker drive mechanism 152 may be mounted on the bottom section of the switchgear frame 124 forming the switchgear housing and connected to the truck 150, and configured to rack the truck and the circuit breaker 250 it carries into a first connected position where the primary circuits 220 and secondary control or test circuits 222 are electrically connected (FIG. 4), a circuit breaker test position where the primary circuits are electrically disconnected and the secondary circuits are connected, and a fully disconnected position where both primary and secondary circuits are disconnected. The circuit breaker drive mechanism 152 may be configured to rack out the truck 150 and the circuit breaker 250 into a second circuit breaker test position where the primary circuit 220 is electrically disconnected and the secondary circuit 222 is connected to the secondary control or test circuits. The electrically connected position as described may also correspond to the electrical contact testing position 160. However, other locations may be used for the contact testing position 160. Secondary connectors as part of the secondary circuit 222 may include a cable or other secondary connection to connect and complete the secondary circuit for testing and/or control. The drive mechanism 152 may also be configured to rack out the truck 150 into a third disconnected position where the primary and secondary circuits 220, 222 are electrically disconnected, as summarized in the sketch below. Further details of an example of the circuit breaker drive mechanism 152 and other components are disclosed in U.S. patent application Ser. No. 17/422,540, filed Jul. 13, 2021, the disclosure of which is hereby incorporated by reference in its entirety.
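As a compact summary of the three racking positions just described, the following sketch (Python, with hypothetical names; the disclosure defines the positions themselves, not this data structure) records which circuits are connected in each position:

    from enum import Enum

    class RackPosition(Enum):
        CONNECTED = 1     # first position: primary 220 and secondary 222 connected
        TEST = 2          # second position: primary disconnected, secondary connected
        DISCONNECTED = 3  # third position: primary and secondary disconnected

    # (primary_connected, secondary_connected) per racking position
    CIRCUIT_STATES = {
        RackPosition.CONNECTED: (True, True),
        RackPosition.TEST: (False, True),
        RackPosition.DISCONNECTED: (False, False),
    }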
The circuit breaker 250 as illustrated in FIG. 2 is a three-phase circuit breaker and includes the first, second and third circuit breakers 250a, 250b, 250c, each formed as a vacuum interrupter 270 (FIG. 4) and defining the three poles 272 for the three-phase circuit as first, second and third single-phase circuits. The upper portion of each pole has a contact arm 274 that connects to a bus bar circuit, for example, as part of an input from a power supply and the primary circuit, and the lower portion of each pole has a contact arm 276 with connectors to connect to a cable assembly or other electrical circuit as part of the output connected to a load. Although only one vacuum interrupter 270 and one pole 272 are illustrated in FIG. 4, there are three vacuum interrupters 270 (FIG. 2) and associated poles across the width of the circuit breaker truck 150. Each vacuum interrupter 270 and pole 272 includes its upper contact arm 274 and lower contact arm 276 and includes connectors that may include a contact finger assembly, shown generally at 280 in FIG. 2, which are received into primary circuit bushings 282 (FIG. 4) that are formed as a primary circuit housing to hold fixed primary circuit contacts 220a, as shown in the dashed lines, and which engage the contact finger assemblies 280. The contact arms 274, 276 may carry the contact finger assemblies 280 (FIG. 2) formed as tulip contacts in different configurations.

Each vacuum interrupter 270 operates as a switch and incorporates its movable electrical contact 170 and its fixed electrical contact 168 in a vacuum as part of the breaker housing 164, in this example formed as a vacuum chamber housing. The separation of the electrical contacts 168, 170, such as during a short circuit or other abnormal electrical condition, or even during electrical contact testing, results in a metal vapor arc, which is quickly extinguished. This medium-voltage switchgear system 100 includes the medium-voltage, three-phase vacuum circuit breaker 250 having the three vacuum interrupters 270. Each vacuum interrupter 270 may provide the fixed electrical contact 168 and movable electrical contact 170 within a hermetically sealed ceramic housing under high vacuum, with a flexible bellows to allow movement of the movable electrical contact. The bellows may be made of stainless steel. Vacuum interrupters may have a very long Mean Time To Failure (MTTF) and include high-technology ceramic housings that impart a vacuum tightness to the range of 10⁻⁷ hPa. The three-phase vacuum circuit breaker 250 as illustrated may operate with protective relays and other sensors to detect overcurrent or other abnormal or unacceptable conditions and signal the circuit breaker to switch open. To maintain heat control in the circuit breaker 250, each pole 272 may include an insulator 284 as illustrated in FIG. 4. Protective relays and sensors may be formed as current transformers, potential transformers, temperature or pressure instruments, and other sensing devices that may operate in the electrical switchgear environment. The vacuum interrupters 270 may operate at 5 kV, 15 kV, 27 kV, and 37 kV, corresponding to the normal operating range of medium-voltage switchgear systems 100.

Referring now to FIG. 6, there is illustrated a testing system 290 for the circuit breaker 250, allowing the contact erosion test to be conducted while the truck 150 carrying the circuit breaker is removed from the switchgear frame 124 and housing and placed on a test platform illustrated generally at 292.
In this example, the test platform 292 may be a rectangular or other geometrically shaped support platform that supports the truck 150 carrying the circuit breaker 250 in a contact testing position 294 on the test platform. In this example, the test platform 292 includes wheel chocks 296, or indentations formed in the test platform, that position the truck 150 properly in the contact testing position 294 on the test platform. The sensor circuit 180 has a configuration similar to that shown in FIG. 3 and is mounted on the test platform 292, such as in a depression or cut-out 297, and positioned such that when the truck 150 rests on the test platform and the wheels are engaged in the wheel chocks 296, the sensor circuit 180 is aligned with the proper circuit breaker 250 in the contact testing position 294. The sensor circuit 180 operates similarly to the sensor circuit described relative to FIGS. 2-5 and acquires displacement data of the actuator piston 174 and breaker housing 164 when the movable electrical contact 170 is moved between the open and closed positions. The test platform 292 includes three sensor circuits for three circuit breakers, with each sensor circuit 180 having a first laser circuit 182 having the first laser 184 (L1) and first optical sensor 198 (D1), and a second laser circuit 200 having the second laser 202 (L2) and second optical sensor 204 (D2), as described also with the sensor circuit 180 of FIGS. 2-5. In the example of FIGS. 2 and 3, a portion of the switchgear frame 124 is illustrated, but that section of the switchgear frame could correspond to a separate test platform 292 on which the truck 150 and mounted circuit breaker 250 will rest after the truck is removed from the switchgear housing and placed on the test platform 292.

Referring again to FIG. 7, there is illustrated the underside of the circuit breaker 250, such as when positioned within a switchgear housing or on the test platform 292. This view shows various components as described before and shows the relative positions of the different surfaces onto which the first and second optical beams may be emitted and from which they are reflected. The references labeled 184a indicate possible surface locations onto which the first optical beam from the first laser 184 may be directed, and the references labeled 202a correspond to possible surface locations onto which the second optical beam from the second laser 202 may be directed.

Referring now to FIG. 8, there is illustrated generally at 300 a high-level flowchart showing a method of operating a switchgear system 100 for determining the electrical contact erosion of the electrical contacts 168, 170 within the circuit breaker 250. The process starts (Block 302) and a truck 150 carrying a circuit breaker 250 is positioned into a contact testing position 160 within the switchgear interior compartment 128 (Block 304). A contact erosion test is instituted by moving the movable electrical contact 170 between open and closed positions (Block 306). Displacement data is acquired at the actuator piston 174 and the breaker housing 164 from the first and second optical beams emitted from the first and second laser circuits 182, 200 (Block 308). The electrical contact erosion is determined within the circuit breaker 250 from the displacement data (Block 310). The process ends (Block 312).
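The flow of FIG. 8 can be expressed as a short orchestration sketch. This is illustrative only; every function name below is a placeholder rather than an API from the disclosure:

    def run_contact_erosion_test(truck, sensor_circuit, controller):
        """Blocks 304-310 of FIG. 8, as hypothetical method calls."""
        truck.move_to_contact_testing_position()            # Block 304
        controller.cycle_movable_contact()                  # Block 306
        piston_mm, housing_mm = sensor_circuit.acquire_displacements()  # Block 308
        return controller.determine_erosion(piston_mm, housing_mm)      # Block 310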
Referring now to FIG. 9, there is illustrated a high-level flowchart, generally at 350, showing a method of operating the testing system 290 for the circuit breaker 250. The process starts (Block 352) and the truck 150 carrying the circuit breaker 250 is removed from the switchgear interior compartment 128, and the truck is positioned on the test platform 292 in the contact testing position 294 (Block 354). The contact erosion test is instituted by moving the movable electrical contact 170 between open and closed positions (Block 356). Displacement data is acquired from the actuator piston 174 and the breaker housing 164 via the first and second laser circuits 182, 200 (Block 358). The electrical contact erosion is determined within the circuit breaker 250 from the displacement data (Block 360). The process ends (Block 362).

In an example, the actuator piston 174 may be connected to the drive assembly 176 and include a stored energy mechanism that may include the actuator spring 188 and the actuator block 190. The actuator piston 174 and drive assembly 176 may include different stroke adjusters, lever shafts, and link rods that work in conjunction with the actuator spring 188 and actuator block 190. The actuator piston 174 and drive assembly 176 may include one or more magnetic actuators and a manual opening mechanism. A servomechanism or electromagnetic system may be used to compress the actuator spring 188 for stored energy. It should also be understood that instead of an optical beam, it is possible to use an acoustic signal. The first and second optical sensors 198, 204 may receive reflected light. In an example, they may operate using a position sensing device (PSD), charge-coupled device (CCD), or CMOS device. Other non-contact sensors may be used.

It is possible for the switchgear system 100 as described to obtain signal data during each arcing event for "real-time" data collection associated with the contact erosion status and/or the expected service life remaining on the contacts of each circuit breaker. It is possible to provide dynamic evaluations and update the data in real time to allow preventive maintenance scheduling and service without disengaging the circuit breaker 250 from an electrically connected position. As noted before, it is also possible to use an acoustic emitter and acoustic sensor instead of an optical laser and sensor or detector. It is also possible to use ultrasound sources and detectors. The controller 226 may trigger the first and second laser circuits 182, 200 and obtain signals corresponding to reflected light beams at successive intervals, such as in response to a trigger signal from the start of the movable electrical contact closing into a closed position, and a trigger signal from the start of opening of the movable electrical contact into an open position. These intervals can range from 50 microseconds to as much as 1 millisecond, including values in between. In an example, the sensor circuits 180 can be movable along the sensor support bar 214 to allow adjustment at the contact testing position 160 when employed in the switchgear system 100, or at the testing position 294 on the test platform 292. Different adjustment mechanisms could be used, such as slidable members on the first and second laser circuits received in grooves or slots of the sensor support bar 214. If an acoustic emitter and sensor are used, the distance may be calculated by measuring the time required for ultrasonic waves to be sent and received, based upon the speed of sound. An optical beam or acoustic waves may be emitted in a pulsed manner, where displacement data and time may be translated to velocity via the slope of the distance-versus-time curve.
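The two calculations just mentioned, round-trip acoustic distance and velocity from the slope of the distance-versus-time curve, can be sketched as follows. The speed-of-sound constant and the sample lists are assumptions for illustration:

    SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at roughly 20 degrees C (assumption)

    def distance_from_round_trip(round_trip_s: float) -> float:
        """Distance in meters from the time for an ultrasonic pulse
        to be sent and received (half the round trip)."""
        return SPEED_OF_SOUND_M_PER_S * round_trip_s / 2.0

    def average_velocity(times_s: list, distances_m: list) -> float:
        """Slope of the distance-versus-time curve between the first
        and last pulsed samples, i.e., average velocity in m/s."""
        return (distances_m[-1] - distances_m[0]) / (times_s[-1] - times_s[0])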
Different power sources for the first and second lasers and any optical sensors may be incorporated within the system 100. Also, the different surfaces onto which the optical beam may be directed and reflected may include a reflective coating, film, or other adhesively attached reflective strips or patches that aid reflectivity and direct the optical beam, acoustic signal, or other ultrasonic signal back to the respective optical sensor or other detector, such as the reflective patch 184b shown in FIG. 7. Any optical beam may be scanned and the time may be measured using laser scanning techniques. Data acquisition intervals can vary from as little as 20 microseconds up to 3 milliseconds, with possible intermediate values. Travel curves can be provided from the displacement data. It is possible to use cloud computing as part of the controller 226, or a large network control center when there are many different circuit breakers and different switchgear frames and housings. This application is related to a copending patent application entitled, "SWITCHGEAR SYSTEM THAT DETERMINES CONTACT EROSION IN CIRCUIT BREAKER," which is filed on the same date and by the same assignee and inventors, the disclosure of which is hereby incorporated by reference. Many modifications and other embodiments of the invention will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the invention is not to be limited to the specific embodiments disclosed, and that modifications and embodiments are intended to be included within the scope of the appended claims.
DESCRIPTION OF EMBODIMENTS

Hereinafter, an embodiment of the present disclosure is described in detail with reference to the drawings.

(Schematic Configuration of Relay State Determination System 100)

FIG. 1 shows an overall configuration of a relay state determination system 100. As an example, the relay state determination system 100 determines whether or not a relay 4 has deteriorated. Here, "deterioration" of the relay 4 means that the opening and closing of the relay 4 may no longer operate normally; in other words, a failure may have occurred. This is one state of the relay 4. As shown in FIG. 1, the relay state determination system 100 includes the relay 4, voltmeters 5 and 8, and a relay state determination device 10. The relay state determination system 100 further includes a direct current (DC) power supply 1, a switch device 2, a diode 3, a shunt resistor 9, an alternating current (AC) power supply 6, and a load 7. As shown in FIG. 1, the relay 4 is arranged across a primary-side circuit and a secondary-side circuit. The relay 4 includes an operation coil 4a on a primary side and a switch 4b on a secondary side. In addition, the switch 4b on the secondary side has a pair of contacts (a first contact 4b1 and a second contact 4b2) in this example. The pair of contacts 4b1 and 4b2 are opened and closed by turning on and off the energization of the operation coil 4a on the primary side.

As shown in FIG. 1, in the primary-side circuit, a positive electrode terminal 1p of the DC power supply 1 is connected to one end 2a of the switch device 2. The other end 2b of the switch device 2 is connected to a cathode terminal 3k of the diode 3. The other end 2b of the switch device 2 is also connected to one end 4a1 of the operation coil 4a. A negative electrode terminal 1m of the DC power supply 1 is connected to an anode terminal 3a of the diode 3. The negative electrode terminal 1m of the DC power supply 1 is also connected to one end 9a of the shunt resistor 9. The other end 9b of the shunt resistor is connected to the other end 4a2 of the operation coil 4a. The voltmeter 8 is connected in parallel to the shunt resistor 9. As a result, the voltmeter 8 can measure a voltage value between the two ends 9a and 9b of the shunt resistor 9. As shown in FIG. 1, in the secondary-side circuit, the first contact 4b1 of the switch 4b is connected to one end 6a of the AC power supply 6, which serves as a load power supply. The other end 6b of the AC power supply 6 is connected to one end 7a of the load 7. The other end 7b of the load 7 is connected to the second contact 4b2 of the switch 4b. The voltmeter 5 is connected in parallel to the switch 4b. Thus, the voltmeter 5 can measure a voltage value between the pair of contacts 4b1 and 4b2 of the relay 4.

The relay state determination device 10 is arranged separately from the primary-side circuit and the secondary-side circuit described above. As shown in FIG. 1, the relay state determination device 10 is communicably connected to the voltmeters 5 and 8. The connection between the relay state determination device 10 and the voltmeters 5 and 8 may be wired or wireless. With this configuration, the relay state determination device 10 can receive voltage values, which are the measurement results of the voltmeters 5 and 8, from the voltmeters 5 and 8. The relay state determination device 10 is also communicably connected to the switch device 2. The connection between the relay state determination device 10 and the switch device 2 may be wired or wireless. With this configuration, the switch device 2 can notify the relay state determination device 10 of the off timing of the switch device 2.
The DC power supply 1 supplies a direct current to the operation coil 4a in the relay 4. In this example, the switch device 2 is composed of a field effect transistor (FET) and is switched from an ON state to an OFF state, or from the OFF state to the ON state, according to a switch control signal from the outside (not illustrated). The switch device 2 transmits a signal indicating the switching timing to the relay state determination device 10. The switch device 2 may be composed of a semiconductor switch other than the FET, or a mechanical switch. The diode 3 is arranged to protect the circuit from a counter electromotive voltage generated by the operation coil 4a, which is an inductive load. As described above, in the relay 4, the pair of contacts 4b1 and 4b2 on the secondary side is opened and closed by turning on and off the energization of the operation coil 4a on the primary side. More specifically, when the switch device 2 is switched on, the energization from the DC power supply 1 to the operation coil 4a is turned on. When the operation coil 4a is energized, the relay 4 (more specifically, the switch 4b) is closed. On the other hand, when the switch device 2 is switched off, the energization from the DC power supply 1 to the operation coil 4a is turned off. When the operation coil 4a is de-energized, the relay 4 (more specifically, the switch 4b) is opened. The switch 4b in the relay 4 has the first contact 4b1 and the second contact 4b2. The voltmeter 5 measures the voltage value between the first contact 4b1 and the second contact 4b2. The voltmeter 5 transmits the measured voltage value as a signal to the relay state determination device 10. The AC power supply 6 supplies AC power to the load 7. The load 7 consumes the supplied AC power and performs a predetermined operation.

(Operation of Relay)

FIG. 4A illustrates a state in which the switch device 2 is turned on and the switch 4b of the relay 4 is "closed". In this state, an armature 4A of the switch 4b is displaced relative to the operation coil 4a by electromagnetic force E1 generated by the operation coil 4a. Specifically, the armature 4A rotates in a direction indicated by an arrow X1 around a supporting point SP against tensile force F1 of a coil spring 41, and brings the first contact 4b1 into contact with the second contact 4b2 while bending by a certain push-in amount Bx. FIG. 4B illustrates a state in which the switch device 2 is turned off and the switch 4b of the relay 4 is "opened". In this state, the electromagnetic force E1 generated by the operation coil 4a decreases, and as a result, the armature 4A of the switch 4b rotates in a direction indicated by an arrow X2 around the supporting point SP by tensile force F2 of the coil spring 41. As a result, the pressure pressing the first contact 4b1 changes from a certain value to zero, and the first contact 4b1 is separated from the second contact 4b2. In this case, a counter electromotive voltage is generated in the operation coil 4a, and the current flowing through the operation coil 4a flows back through the diode 3.

(Schematic Configuration of Relay State Determination Device 10)

Next, a configuration of the relay state determination device 10 is described. FIG. 2 illustrates a schematic configuration of the relay state determination device 10. In the present embodiment, the relay state determination device 10 determines whether the relay 4 described above has deteriorated.
As illustrated in FIG. 2, the relay state determination device 10 includes a signal reception unit 21, a voltage value acquisition unit 22, a display unit 23, an operation unit 24, a memory 25, a threshold value storage unit 26, a notification unit 27, and a processor 28. In the relay state determination device 10, the processor 28 is communicably connected to the signal reception unit 21, the voltage value acquisition unit 22, the display unit 23, the operation unit 24, the memory 25, the threshold value storage unit 26, and the notification unit 27. With this configuration, the processor 28 controls the signal reception unit 21, the voltage value acquisition unit 22, the display unit 23, the operation unit 24, the memory 25, the threshold value storage unit 26, and the notification unit 27, and the respective units 21, 22, 23, 24, 25, 26, and 27 perform predetermined operations under this control.

The signal reception unit 21 transmits and receives a signal or data to and from an external terminal. For example, the signal reception unit 21 according to the present embodiment is communicably connected to the switch device 2. Therefore, the signal reception unit 21 receives, from the switch device 2, data indicating the timing at which the switch device 2 is turned to the OFF state, and the like. The voltage value acquisition unit 22 transmits and receives a signal or data to and from the external terminal. For example, the voltage value acquisition unit 22 according to the present embodiment is communicably connected to the voltmeters 5 and 8. Therefore, the voltage value acquisition unit 22 receives (acquires) signals indicating the voltage values measured by the voltmeters 5 and 8 from the voltmeters 5 and 8. The display unit 23 is a monitor that displays various images. The display unit 23 can visually display the results of various types of analysis and the like performed in the processor 28. In addition, the display unit 23 can also visibly display predetermined information in response to an instruction from the user via the operation unit 24. For example, the display unit 23 may visibly display the information (data) stored in the memory 25 and the threshold value storage unit 26. Furthermore, the display unit 23 may visibly display a predetermined notification and the like. For example, a liquid crystal monitor or the like can be adopted as the display unit 23. The operation unit 24 (which can be understood as a threshold value input unit) is a portion that receives a predetermined operation (instruction) from the user. For example, the operation unit 24 is constituted of a mouse, a keyboard, and the like. Note that in the case in which a touch panel monitor is employed as the display unit 23, the display unit 23 has not only a display function but also a function as the operation unit 24. The memory 25 stores various types of data. The memory 25 includes a random access memory (RAM), a read only memory (ROM), and the like. For example, various programs used for the operation of the processor 28 and the like are changeably stored in the memory 25. In addition, the memory 25 stores data (data indicating the switching timing) from the switch device 2 acquired by the signal reception unit 21, voltage value data from the voltmeters 5 and 8 acquired by the voltage value acquisition unit 22, and so on. The memory 25 may erase various types of stored data after a preset predetermined time period elapses following the storage. The threshold value storage unit 26 stores a threshold value Th for determining whether or not the relay 4 has deteriorated.
Here, the threshold value Th is determined (set) by the user based on, for example, an empirical rule. The threshold value Th stored in the threshold value storage unit 26 can be changed. For example, the operation unit 24 functions as the threshold value input unit for variably inputting the threshold value Th. The user inputs a desired threshold value Th to the operation unit 24. Thereby, the threshold value Th is stored (set) in the threshold value storage unit 26. Note that, in the case in which a threshold value Th′ is already stored in the threshold value storage unit 26, the threshold value Th′ is changed to the threshold value Th corresponding to the operation from the user via the operation unit 24. Note that the threshold value storage unit 26 may have a predetermined threshold value Th as a default. The notification unit 27 notifies that the relay 4 has deteriorated based on an analysis result of the processor 28 described later. For example, in the case in which the notification unit 27 includes a speaker, the notification unit 27 outputs a predetermined sound. Furthermore, in the case in which the notification unit 27 includes a member that outputs predetermined light, the notification unit 27 outputs the predetermined light. The display unit 23 can have the function of the notification unit 27, and in this case, predetermined information (information indicating deterioration of the relay 4) is displayed on the display unit 23 in a visually recognizable manner. The processor 28 includes a central processing unit (CPU) in this example. For example, the processor 28 reads each program and each piece of data stored in the memory 25. In addition, the processor 28 controls each of the units 21 to 27 according to the read program to execute a predetermined operation (function). The processor 28 also performs predetermined calculation, analysis, and processing (in blocks 28a and 28b configured by programs) according to the read program. Note that some or all of the functions executed by the processor 28 may be configured as hardware by one or a plurality of integrated circuits or the like. As illustrated in FIG. 2, the processor 28 according to the present embodiment includes, as functional blocks, an RUS calculation unit 28a and a state determination unit 28b programmed to realize predetermined operations. The operation of each of the blocks 28a and 28b is described in detail in the description of the operation below.

(Operation of Relay State Determination System 100)

Next, the operation of the relay state determination system 100 to determine whether or not the relay 4 has deteriorated is described with reference to the flowchart shown in FIG. 3. Referring to FIG. 3, it is assumed that the switch device 2 is switched from the ON state to the OFF state (step S1). The switch device 2 notifies the relay state determination device 10 of the switching. The signal reception unit 21 of the relay state determination device 10 receives the notification. Next, the voltmeter 8 measures the voltage between the two ends 9a and 9b of the shunt resistor 9 (step S2). The voltmeter 8 transmits a voltage value Va as the measurement result to the relay state determination device 10, and the voltage value acquisition unit 22 of the relay state determination device 10 receives the voltage value Va. The memory 25 stores the voltage value Va received by the voltage value acquisition unit 22. During this time, the voltmeter 8 is continuously measuring the voltage between the two ends 9a and 9b of the shunt resistor 9.
Note that FIG. 5 illustrates the time changes, after the turn-off instruction to the relay 4 (after step S1 in FIG. 3), of the voltage value Va between the two ends 9a and 9b of the shunt resistor 9 and of a voltage value Vb between the first contact 4b1 and the second contact 4b2 in the switch 4b. The vertical axis in FIG. 5 represents the voltage value (V), and the horizontal axis in FIG. 5 represents time (ms). Next, the RUS calculation unit 28a of the relay state determination device 10 acquires a voltage value V1 at the time when the voltage value Va decreases and becomes minimum as the armature 4A starts to be displaced in the direction indicated by the arrow X2 in FIG. 4B (step S3 in FIG. 3). Next, the voltmeter 5 measures the voltage between the first contact 4b1 and the second contact 4b2 in the switch 4b. Then, when the switch 4b is opened (as indicated by a dotted circle D in FIG. 5, where the voltage value Vb suddenly decreases), the RUS calculation unit 28a of the relay state determination device 10 acquires the voltage value Va measured by the voltmeter 8 as a second voltage value V2 (step S4 in FIG. 3). Next, the RUS calculation unit 28a, as the voltage value difference calculation unit of the relay state determination device 10, calculates a voltage value difference VD between the voltage value V1 and the voltage value V2 (step S5). Next, the RUS calculation unit 28a of the relay state determination device 10 calculates the RUS by dividing the voltage value difference VD by the resistance value of the shunt resistor 9 (step S6). Next, the RUS calculation unit 28a transmits the RUS to the state determination unit 28b. The state determination unit 28b reads out the threshold value Th stored in the threshold value storage unit 26. Then, the state determination unit 28b compares the RUS with the threshold value Th to determine whether or not the relay 4 has deteriorated (step S7). Note that, as can be seen from the above, the RUS used in the comparison processing in step S7 is the RUS obtained in step S6. Further, the threshold value used in the comparison processing in step S7 is the threshold value Th preset in the threshold value storage unit 26 of the relay state determination device 10. Still further, the threshold value Th is set by the user on the basis of an empirical rule or the like.

FIG. 6 illustrates a relationship between the RUS and the number of opening and closing operations of the contacts 4b1 and 4b2. The vertical axis in FIG. 6 is the RUS (μA), and the horizontal axis in FIG. 6 is the number of opening and closing operations of the contacts 4b1 and 4b2. As illustrated in FIG. 6, generally, as the number of opening and closing operations of the relay 4 increases, the RUS gradually decreases. The user sets the threshold value Th on the basis of an empirical rule in consideration of the measurement result of the RUS illustrated in FIG. 6, the usage status of the relay 4, the expected time point of failure of the relay 4 (the time point at which the opening and closing of the relay 4 is expected not to operate normally), and the like. In the example of FIG. 6, the threshold value Th is set to 100 μA. That is, in the example of FIG. 6, the user considers the above factors and determines that the relay 4 has deteriorated when the RUS of the relay 4 in use falls below 100 μA. In step S7 of FIG. 3, specifically, the state determination unit 28b determines whether or not the RUS has fallen below the threshold value Th.
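Steps S3 to S6 reduce to a few lines of arithmetic. The sketch below (Python; the waveform arrays, the sudden-drop detector for Vb, and all names are assumptions for illustration, not the disclosure's implementation) computes the RUS from Va and Vb sampled on a common time base:

    def compute_rus(va, vb, r_shunt_ohm, vb_drop_threshold):
        """RUS in amperes from Va (shunt voltage) and Vb (contact voltage),
        both sampled after the turn-off instruction."""
        v1 = min(va)  # S3: minimum of Va as the armature starts to move
        # S4: first index where Vb suddenly decreases -> contacts opened
        open_idx = next((i for i in range(1, len(vb))
                         if vb[i - 1] - vb[i] > vb_drop_threshold), None)
        if open_idx is None:
            raise ValueError("contact opening not detected in Vb")
        v2 = va[open_idx]
        vd = v2 - v1                 # S5: voltage value difference VD
        return vd / r_shunt_ohm      # S6: RUS = VD / R_shunt

    def relay_deteriorated(rus_amp: float, th_amp: float) -> bool:
        return rus_amp < th_amp      # S7: below threshold -> deteriorated

With the FIG. 6 example, th_amp would be 100e-6, i.e., 100 μA.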
Suppose that the state determination unit 28b determines that the RUS is equal to or more than the threshold value Th ("NO" in step S7). In this case, as illustrated in FIG. 3, the relay state determination processing ends. On the other hand, suppose that the state determination unit 28b determines that the RUS has fallen below the threshold value Th ("YES" in step S7). In this case, the state determination unit 28b controls the notification unit 27, and the notification unit 27 notifies that the relay 4 has deteriorated (step S8). Thereafter, the relay state determination processing ends.

(Effects)

As described with respect to the prior art, if the relay is a multipole type, there is a problem in that the deterioration of each individual pole is difficult to determine. In contrast, in the present embodiment, the RUS is calculated for each switch 4b, that is, for each pole. Then, it is determined whether or not the RUS has fallen below the threshold value. As a result, the deterioration of each individual pole in the relay can be determined. FIG. 6 shows test data exemplifying the relationship between the RUS of the relay and the number of opening and closing operations of the relay. Samples #1 to #5 shown in FIG. 6 are relays of the same type (model number) and have been tested under the same conditions. As can be seen from the experimental example of FIG. 6, even among relays of the same type (model number), the individual difference between samples #1 to #5 is large with regard to, for example, the RUS. In consideration of this, in the present embodiment, the RUS calculation unit 28a calculates the RUS for the relay 4. Then, the state determination unit 28b compares the RUS with the threshold value Th to determine whether the relay 4 has deteriorated. Here, it is empirically known that the individual difference between relays in the RUS at the time of deterioration is small. Therefore, according to the present embodiment, it is possible to accurately determine whether or not the relay 4 has deteriorated. In addition, in the present embodiment, when it is determined that the RUS has fallen below the threshold value Th ("YES" in step S7), the notification unit 27 notifies that the relay 4 has deteriorated. Therefore, the user can quickly take measures such as replacing the relay 4.

Software (a computer program) for causing a computer to execute the relay state determination method (FIG. 3) may be recorded in a recording medium that can store data in a non-transitory manner, such as a compact disc (CD), a digital versatile disc (DVD), or a flash memory. By installing the software recorded on the recording medium in an actual computer device such as a personal computer, a personal digital assistant (PDA), or a smartphone, the computer can be caused to execute the relay state determination method described above. In addition, in the above-described embodiment, the processor 28 includes a CPU, but the present invention is not limited thereto. The processor 28 may include a logic circuit (integrated circuit) such as a programmable logic device (PLD) or a field programmable gate array (FPGA).
As described above, a relay state determination device of the present disclosure determines whether or not a relay has deteriorated.
The relay includes:
a primary-side switch, an operation coil, and a shunt resistor that are connected in series to a primary-side power supply;
a diode connected in parallel to a series connection of the operation coil and the shunt resistor, in a direction in which current due to a counter electromotive force of the operation coil flows to the shunt resistor when the primary-side switch is turned off; and
an armature that opens and closes at least a pair of secondary-side contacts in response to on and off of the primary-side switch, the armature being configured to be displaced relative to the operation coil by an electromagnetic force generated by the operation coil when the primary-side switch is turned on and to bring one of the secondary-side contacts into contact with the other of the secondary-side contacts while being deflected by a certain push-in amount.
The relay state determination device comprises:
a voltage value acquisition unit that measures every moment a detected voltage detected from two ends of the shunt resistor;
a voltage value difference calculation unit that calculates a voltage value difference between a first voltage value of when the detected voltage becomes minimum by the armature starting displacement after the primary-side switch is turned off and a second voltage value of when the secondary-side contacts are opened; and
a state determination unit that determines that the relay has deteriorated when the voltage value difference falls below a predetermined threshold value.

In the relay state determination device of the present disclosure, the voltage value acquisition unit measures every moment the detected voltage detected from the two ends of the shunt resistor. The voltage value difference calculation unit calculates the voltage value difference between the first voltage value of when the detected voltage becomes minimum by the armature starting displacement after the primary-side switch is turned off and the second voltage value of when the secondary-side contacts are opened. The state determination unit determines that the relay has deteriorated when the voltage value difference falls below the predetermined threshold value. In this way, the deterioration for each secondary-side contact, that is, for each pole of the relay, can be determined. Therefore, the deterioration of each individual pole in the relay can be determined. In the relay state determination device of one embodiment, the state determination unit determines that the relay has deteriorated when a current value obtained by dividing the voltage value difference by the value of the shunt resistor falls below a predetermined threshold value. In the present description, the current value obtained by dividing, by the value of the shunt resistor, the voltage value difference between the first voltage value of when the detected voltage becomes minimum by the armature starting displacement after the primary-side switch is turned off and the second voltage value of when the secondary-side contacts are opened (this is referred to as "Reset Undershoot (RUS)") is the value of the current flowing through the operation coil in the period until the pressure pressing the secondary-side movable contact changes from a certain value to zero.
This corresponds to the push-in amount of the armature, and a decrease in the push-in amount indicates that the armature has deteriorated. In the relay state determination device of the one embodiment, the state determination unit determines that the relay has deteriorated when the current value obtained by dividing the voltage value difference by the value of the shunt resistor falls below a predetermined threshold value. Therefore, the deterioration of the relay can be determined based on the current value (that is, the RUS). In the relay state determination device of one embodiment, the relay state determination device further comprises a notification unit that notifies that the relay has deteriorated when it is determined that the voltage value difference, or the current value obtained by dividing the voltage value difference by the value of the shunt resistor, is less than a predetermined threshold value. In the relay state determination device of the one embodiment, a user can recognize that the relay has deteriorated by receiving the notification. Therefore, the user can quickly take measures such as replacing the relay.

In another aspect, a relay state determination system of the present disclosure comprises:
a relay that includes a primary-side switch, an operation coil, and a shunt resistor that are connected in series to a primary-side power supply,
a diode connected in parallel to a series connection of the operation coil and the shunt resistor in a direction in which current due to a counter electromotive force of the operation coil flows to the shunt resistor when the primary-side switch is turned off, and
an armature that opens and closes at least a pair of secondary-side contacts in response to on and off of the primary-side switch, the armature being configured to be displaced relative to the operation coil by an electromagnetic force generated by the operation coil when the primary-side switch is turned on and to bring one of the secondary-side contacts into contact with the other of the secondary-side contacts while being deflected by a certain push-in amount;
a first voltmeter that measures a detected voltage detected from two ends of the shunt resistor;
a second voltmeter that measures a voltage between the pair of secondary-side contacts of the relay; and
a relay state determination device communicably connected to the first voltmeter and the second voltmeter,
wherein the relay state determination device includes:
a voltage value acquisition unit that measures every moment the detected voltage detected from the two ends of the shunt resistor;
a voltage value difference calculation unit that calculates a voltage value difference between a first voltage value of when the detected voltage becomes minimum by the armature starting displacement after the primary-side switch is turned off and a second voltage value of when the secondary-side contacts are opened; and
a state determination unit that determines that the relay has deteriorated when the voltage value difference falls below a predetermined threshold value.

In the relay state determination system of the present disclosure, the deterioration of each individual pole in the relay can be determined.
In another aspect, a relay state determination method of the present disclosure determines whether or not deterioration has occurred in a relay.
The relay includes:
a primary-side switch, an operation coil, and a shunt resistor that are connected in series to a primary-side power supply;
a diode connected in parallel to a series connection of the operation coil and the shunt resistor in a direction in which current due to a counter electromotive force of the operation coil flows to the shunt resistor when the primary-side switch is turned off; and
an armature that opens and closes at least a pair of secondary-side contacts in response to on and off of the primary-side switch, the armature being configured to be displaced relative to the operation coil by an electromagnetic force generated by the operation coil when the primary-side switch is turned on and to bring one of the secondary-side contacts into contact with the other of the secondary-side contacts while being deflected by a certain push-in amount.
The relay state determination method comprises:
measuring every moment a detected voltage detected from two ends of the shunt resistor;
calculating a voltage value difference between a first voltage value of when the detected voltage becomes minimum by the armature starting displacement after the primary-side switch is turned off and a second voltage value of when the secondary-side contacts are opened; and
determining that the relay has deteriorated when the voltage value difference falls below a predetermined threshold value.

In the relay state determination method of the present disclosure, the deterioration of each individual pole in the relay can be determined. In yet another aspect, a non-transitory computer readable medium of the present disclosure is a non-transitory computer readable medium configured to cause a computer to execute the relay state determination method. The relay state determination method can be performed by causing a computer to execute the program recorded on the non-transitory computer readable medium of the present disclosure. The above embodiments are illustrative, and various modifications can be made without departing from the scope of the present invention. It is to be noted that the various embodiments described above can each be appreciated individually, but the embodiments can also be combined together. It is also to be noted that the various features in different embodiments can each be appreciated individually on their own, but the features of different embodiments can also be combined.
DETAILED DESCRIPTION

A battery monitoring module according to the present disclosure, including, for example, an electric-vehicle (EV) battery monitoring module, may be arranged to receive a sensor signal from a light sensor configured to detect light within a battery module. Battery modules are often configured to be completely enclosed in order to protect battery cells and electronics contained within the module from environmental factors such as temperature extremes and debris. However, battery enclosures pose a challenge for monitoring the battery conditions (e.g., current, voltage, temperature, etc.) of individual battery cells, or groups of battery cells, within the battery enclosure. Conventional sensors, such as voltage, temperature, and current sensors, often fail to detect abnormal battery conditions (e.g., arcing, fires, damage to the battery module enclosure, etc.) in a timely manner, or at all, before permanent damage is done to one or more battery cells. A battery monitoring module according to the present disclosure allows the techniques of the present disclosure to be applied to a battery module in an electric vehicle. The battery monitoring module in the present disclosure determines a light characteristic within the battery module based on the sensor signal from the light sensor configured to detect light within the battery module. The detection of light within the battery module (e.g., within a sealed battery enclosure) may indicate the presence of one or more battery cells within the battery module operating under abnormal battery conditions, or damage to the battery module enclosure. For example, in some embodiments, the battery monitoring module may receive a sensor signal from a light sensor mounted inside the battery module. The battery monitoring module may determine, using processing circuitry, that the sensor signal indicates the presence of visible light within the battery module (e.g., by determining the frequency of the light from the sensor signal). The battery monitoring module may determine, from the presence of the visible light within the battery module, that a fire broke out inside the battery module, as a fire gives off visible light wavelengths. The battery monitoring module may detect the fire faster than a conventional temperature sensor, as it may take longer for the temperature within the entire battery module to rise to a level that indicates an abnormal battery condition than the near-instantaneous detection of light.

FIG. 1 shows a system diagram of an illustrative battery monitoring module 100, in accordance with some embodiments of the present disclosure. Battery monitoring module 100 may comprise battery module 102 and processing module 104. Processing module 104 includes software 108 and storage 110. In some embodiments, processing module 104 may implement battery monitoring module 100 using software 108. Software 108 may be configured to analyze data sent from battery module 102. Software 108 may be configured to trigger warning alerts when various battery parameters fall outside of the expected operating limits of the monitored battery cells. The expected operating limits may be set by user input, or may be default limits preconfigured in software 108. In some embodiments, processing module 104 executes instructions for a battery monitoring module stored in memory. Storage 110 may include an electronic storage device, such as memory.
For example, storage 110 may be configured to store electronic data, computer software, or firmware, and may include random-access memory, read-only memory, hard drives, optical drives, solid state devices, or any other suitable fixed or removable storage devices, and/or any combination of the same. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). In some embodiments, processing module 104 may include a processor, a power supply, power management components (e.g., relays, filters, voltage regulators), input/output (I/O) (e.g., GPIO, analog, digital), memory, communications equipment (e.g., CANbus hardware, Modbus hardware, or a WiFi module), any other suitable components, or any combination thereof. In some embodiments, processing module 104 may include one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor. In some embodiments, processing module 104 may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units or multiple different processors.

Battery module 102 includes battery cells 112 and one or more light-detecting sensors, including first light-detecting sensor 114 and second light-detecting sensor 116. It will be understood that any suitable number of light-detecting sensors may be used. Light-detecting sensors are discussed in more detail in FIG. 5. The one or more light-detecting sensors may be equipped with optical filter 118. Optical filter 118 may be a device that selectively transmits light of different wavelengths. For example, optical filter 118 may be an absorption filter (e.g., a filter made of various compounds added to glass or plastic, where the compounds absorb some wavelengths of light while transmitting others) or a dichroic filter (e.g., a filter made by coating a glass substrate with a series of optical coatings that reflect unwanted wavelengths and transmit the remainder). In some embodiments, optical filter 118 may not be included, or may be a device that transmits all light on the wavelength spectrum. In some embodiments, optical filter 118 may be a device that transmits light only in the visible spectrum (e.g., approximately 390 nm to 700 nm). In some embodiments, optical filter 118 may be a device that transmits light only in the ultraviolet spectrum (e.g., approximately 100 nm to 400 nm). In some embodiments, optical filter 118 may be a device that transmits light only in the infrared spectrum (e.g., approximately 700 nm to 1 mm). And in some embodiments, optical filter 118 may be a device that transmits light in a select few bands of the wavelength spectrum (e.g., only the infrared, visible, and ultraviolet bands of the spectrum). In some embodiments, first light-detecting sensor 114 and second light-detecting sensor 116 may each be fitted with an optical filter such as optical filter 118 (e.g., the optical filter may be built into the light-detecting sensor, or the optical filter may be a separate piece of hardware coupled to the light-detecting sensor). Optical filter 118 may filter certain wavelengths (e.g., filter out all wavelengths outside of the visible spectrum). In some embodiments, optical filter 118 may be coupled to a light-detecting sensor to separate light generated by one or more electrical components of battery module 102 from the light generated by an undesirable condition (e.g., a battery condition). For example, optical filter 118 may be coupled to light-detecting sensor 114 to filter out light generated by one or more electrical components (e.g., light-emitting diodes (LEDs) that emit light in the ultraviolet, visible, and/or infrared wavelengths) contained within battery module 102.
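The approximate band boundaries listed above can be captured in a small classifier. This sketch uses the disclosure's approximate ranges; the 395 nm cutover, where the stated ultraviolet and visible ranges overlap, is an arbitrary choice for the sketch:

    def classify_wavelength_nm(wavelength_nm: float) -> str:
        """Map a wavelength to the bands discussed above (approximate)."""
        if wavelength_nm < 100:
            return "below monitored range"
        if wavelength_nm < 395:         # ultraviolet: ~100 nm to ~400 nm
            return "ultraviolet"
        if wavelength_nm <= 700:        # visible: ~390 nm to ~700 nm
            return "visible"
        if wavelength_nm <= 1_000_000:  # infrared: ~700 nm to ~1 mm (1e6 nm)
            return "infrared"
        return "above monitored range"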
Battery module 102 may also be equipped with light emitter 120. Light emitter 120 may be any device that emits light (e.g., an LED). Light emitter 120 may be controlled by processing module 104. For example, processing module 104 may transmit a signal (e.g., generate a current) to turn on light emitter 120 (i.e., cause light emitter 120 to emit light). In some embodiments, battery module 102 may include auxiliary sensors 122. Auxiliary sensors 122 may include voltage sensor 124, current sensor 126, or temperature sensor 128, or any combination of the above. Auxiliary sensors 122 may be coupled to battery cells 112. Auxiliary sensors 122 may transmit signal data corresponding to battery data (e.g., current, voltage, and temperature data) to processing module 104. In some embodiments, multiple auxiliary sensors in tandem can be used to transmit signal data corresponding to battery data. In some embodiments, battery module 102 may process signals from one or more of auxiliary sensors 122 or first light-detecting sensor 114, which may be, but need not be, included in battery monitoring module 100. Sensors (including light-detecting sensors and auxiliary sensors) may include sensors for sensing voltage, current, impedance, temperature, any other suitable parameter, or any combination of parameters. For example, auxiliary sensors 122 may include voltage sensor 124, which measures voltage across suitable terminals of at least one battery cell within battery cells 112. As another example, battery cells 112 may be electrically connected to busbars (e.g., subsets of batteries can be connected in parallel and different subsets can be connected in series) and voltage sensor 124 may measure voltage across suitable busbars. In a further example, auxiliary sensors 122 may include temperature sensor 128 coupled to battery cells 112 (e.g., to determine if battery cells 112 are overheating). It will be understood that multiple auxiliary sensors 122 of the same type may be used to measure the same property at different locations or across different components of battery module 102.

In some embodiments, for example, processing module 104 may differentiate light generated by electrical components from light generated by an undesirable condition. In some embodiments, for example, electrical components, which may be, but are not necessarily, inside battery enclosure 102, may cause light of certain wavelengths to vary in intensity in a known way (e.g., as a function of battery load). Processing module 104 may subtract out the known wavelengths or their known signature from the overall light signal, or otherwise ignore these known wavelengths. Processing module 104 may retrieve the wavelengths that vary in a known way from a database (e.g., a lookup table) tabulated with editor-defined data (e.g., data provided by manufacturers of the components), user-defined data (e.g., determined from experimental results), or a combination of the above.
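One way to picture the subtraction just described is per-wavelength arithmetic on a measured spectrum. A minimal sketch, assuming the known component signature scales linearly with battery load (the disclosure only says it varies "in a known way", so the scaling rule and all names are assumptions):

    def residual_spectrum(measured: dict, known_signature: dict,
                          battery_load: float) -> dict:
        """Measured intensity minus the load-scaled known component
        emission, floored at zero. Keys are wavelengths in nm
        (a hypothetical layout for the lookup-table data)."""
        return {wl: max(0.0, intensity - battery_load * known_signature.get(wl, 0.0))
                for wl, intensity in measured.items()}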
Processing module 104 may be coupled to the electronics (e.g., light-detecting sensors, light emitters, light filters, auxiliary sensors, etc.) within battery module 102. Processing module 104 may also include one or more switches (e.g., switch 106) coupled between processing module 104 and battery module 102. For example, switch 106 may be an SPDT relay, and light-detecting sensors 114 and 116 and auxiliary sensors 122 may include additional switched terminals for determining the position of switch 106 (e.g., using a lower voltage/power circuit). Processing module 104 may be coupled to switch 106 and components in battery module 102 using any suitable wired, or non-wired, coupling. For example, processing module 104 may be coupled to switch 106 using suitable cables having any suitable terminations (e.g., plugs, screw-down terminals, soldered connections). In a further example, processing module 104 may communicate wirelessly (e.g., using WiFi or Bluetooth) with switch 106, which may include a transceiver to receive communication and actuate the corresponding switch (and which may also include a power supply).

In some embodiments, light-detecting sensor 114 may detect the presence of light (e.g., the sensor is triggered). Light-detecting sensor 114 may send a signal containing information about the light (e.g., properties of the light such as wavelength, luminescence, intensity, etc.) to processing module 104. Processing module 104 may retrieve software 108 from storage 110. Software 108 may be configured to analyze the information from the signal sent from sensor 114. In some embodiments, light-detecting sensor 114 may be activated when light (e.g., photons) is detected. Light-detecting sensor 114 may generate a current, where the amplitude of the current is proportional to the amount of light received. Processing module 104 may receive multiple signals corresponding to the current generated by light-detecting sensor 114, where the signals contain various current amplitude values, each corresponding to a different wavelength of light or wavelength range of light. In some embodiments, the detection of light from light-detecting sensor 114 may precede detection of a possible battery fault by commonly monitored parameters, such as temperature, voltage, and current. The detection of light may also provide a redundant measurement and confirmation of failures detected by other sensor types. It can reduce false positives, identify actual failures more frequently, and simplify meeting safety standards for functional safety (ISO 26262). For example, light-detecting sensor 114 may detect electrical arcing at low currents that may not cause a battery fuse to blow. Furthermore, electrical arcing at low currents may initially appear as anomalous current and voltage behavior that does not automatically drive immediate action. However, if there is also a simultaneous detection of the presence of light, the severity of the anomaly can be gauged, leading to the correct reactions being implemented sooner, before a potential safety issue can worsen.

In some embodiments, software 108 may be configured to take in inputs from light-detecting sensor 114, any other light-detecting sensors, and auxiliary sensors 122. Software 108 may analyze the one or more light signals to determine what frequencies of light are detected (e.g., ultraviolet light, visible light, blue light, infrared light, and low-frequency infrared light). In some embodiments, for example, when there is more than one light-detecting sensor (e.g., four such sensors, one on each side of battery enclosure 102), processing module 104 may determine that there is light within the battery enclosure if at least one light-detecting sensor is triggered (e.g., sends a signal to processing module 104 indicating that light was detected).
In some embodiments, for example, when there is more than one light-detecting sensor (e.g., four such sensors, one on each side of battery enclosure102), processing module104may determine that there is light within the battery enclosure if at least half of the total number of light-detecting sensors are triggered (e.g., send signals to processing module104indicating that light was detected). In some embodiments, for example, when there is more than one light-detecting sensor (e.g., four such sensors, one on each side of battery enclosure102), processing module104may determine that there is light within the battery enclosure if at least a threshold number of light-detecting sensors are triggered (e.g., send signals to processing module104indicating that light was detected). The threshold number of light-detecting sensors may be retrieved from storage110, or via a network from a remote server. The threshold number of light-detecting sensors may be editor-defined, or determined based on user input. In some embodiments, software108may also take in inputs from auxiliary sensors122. Software108may retrieve normal operating condition boundaries (e.g., temperature boundaries, voltage boundaries, current boundaries, etc.) from storage110, or via a network from a remote server. Normal operating condition boundaries may be determined by test data, input by a user, default values defined by an editor, or any combination of the above. Software108may compare the data inputs (e.g., battery monitoring signals) from auxiliary sensors122(e.g., voltage sensor124, current sensor126, temperature sensor128, etc.) to the normal operating condition boundaries to determine if one or more of auxiliary sensors122indicate that battery cells112are operating under abnormal conditions. In some embodiments, software108may determine that the input from the battery monitoring signal from current sensor126indicates that the current in battery cells112is higher than the upper bound of the normal current operating conditions, and may determine that the input from light-detecting sensor114indicates the presence of light within the battery enclosure. In response to determining the presence of light in the battery enclosure and the higher than normal current, software108may generate a warning alert to the user about the occurrence of a possible battery fault. As referred to herein, a battery fault is any condition that contributes to a battery operating outside of its normal operating conditions, including a breach of the battery cell packaging, one or more battery cells experiencing arcing, a short circuit, a fire, sparking, etc. In some embodiments, software108may automatically power off the battery cells (e.g., using active switch106as a kill switch) in response to determining the occurrence of a possible battery fault. In some embodiments, software108may automatically power off the entire battery module102using active switch106in response to determining the occurrence of a possible battery fault. In some embodiments, software108may automatically power off one or more battery cells112in battery module102using active switch106in response to determining the occurrence of a possible battery fault. 
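For illustration, the following is a minimal Python sketch of the sensor voting and boundary comparison described above. All names, thresholds, and the example values are illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch (assumed names and thresholds): combine light-sensor
# "votes" with an auxiliary-sensor boundary check, as described above.

def light_detected(sensor_triggers: list[bool], threshold: int) -> bool:
    """Return True if at least `threshold` light-detecting sensors triggered."""
    return sum(sensor_triggers) >= threshold

def outside_boundaries(value: float, low: float, high: float) -> bool:
    """Check a battery monitoring signal against normal operating boundaries."""
    return value < low or value > high

# Example: four sensors with "at least half" voting; current above its upper bound.
triggers = [True, False, True, False]
current_a = 62.0
if light_detected(triggers, threshold=len(triggers) // 2) and \
        outside_boundaries(current_a, low=0.0, high=50.0):
    print("Warning: possible battery fault (light detected + abnormal current)")
```

The threshold argument makes the "at least one," "at least half," and arbitrary-threshold variants above interchangeable.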
In some embodiments, for example, software108may determine that the input from the battery monitoring signal from current sensor126indicates that the current in battery cells112is higher than the upper bound of the normal current operating conditions, and may determine that the input from light-detecting sensor114indicates that a light wavelength in the ultraviolet range was detected. Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of high current and UV light) that the fault condition is arcing. In some embodiments, for example, software108may determine that the light signal or signals from light-detecting sensor114indicate that white, blue, and ultraviolet light was detected. Software108may determine (e.g., via a lookup table) that the fault condition is sparking in battery module102. In some embodiments, for example, software108may determine that the input from the battery monitoring signal from temperature sensor128indicates that the temperature in battery cells112is elevated (e.g., higher than normal, but may still be within normal operating range), and may determine that the input from light-detecting sensor114indicates that a light wavelength in the ultraviolet range was detected. Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of elevated temperature and UV light) that the fault condition is arcing. In some embodiments, for example, software108may determine that the input from the battery monitoring signal from current sensor126indicates that the current in battery cells112is higher than the upper bound of the normal current operating conditions, and may determine that the input from light-detecting sensor114indicates that light wavelengths in both the visible range and the ultraviolet range were detected. Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of high current and UV and visible light) that the fault condition is arcing. In some embodiments, for example, software108may determine that the input from the battery monitoring signal from temperature sensor128indicates that the temperature in battery cells112is elevated, and may determine that the input from light-detecting sensor114indicates that a light wavelength in the visible range was detected. Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of elevated temperature and visible light) that the fault condition is a fire and that the components in battery module102are heating up. In some embodiments, for example, software108may determine that the input from the battery monitoring signal from temperature sensor128indicates that the temperature in battery cells112is elevated, and may determine that the input from light-detecting sensor114indicates that a light wavelength in the low frequency IR range was detected. Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of elevated temperature and low frequency IR light) that the fault condition is a fire. In some embodiments, for example, software108may determine that the inputs from auxiliary sensors122indicate that battery cells112are within normal operating ranges. Software108may determine that the input from light-detecting sensor114indicates that light was detected in the battery enclosure. 
Software108may determine (e.g., via a lookup table of possible fault conditions using the inputs of normal operating conditions and detection of light) that the fault condition is a breach of the battery enclosure. FIG.2shows a system diagram of an illustrative battery monitoring module200, in accordance with some embodiments of the present disclosure. In some embodiments, battery monitoring module200corresponds to battery module102ofFIG.1. Battery monitoring module200comprises battery enclosure201. Battery enclosure201may fully encompass one or more battery cells such as, for example, battery cell202. Battery cell202is found within battery enclosure201. There may be multiple battery cells connected in series or parallel, or a combination of the two, via one or more busbars within battery enclosure201. Light-detecting sensors204,206,208, and210are positioned within battery enclosure201. Light-detecting sensors204,206,208,210may be positioned at any place within battery enclosure201. It will be understood that while four light-detecting sensors are positioned within battery enclosure201, any suitable number of light-detecting sensors can be used such as one, two, three, four, five, or more sensors. In some embodiments, light-detecting sensor204may be placed in the middle of the top edge equidistant from the top corners of battery enclosure201, light-detecting sensor206may be placed in the middle of the right edge equidistant from the right corners of battery enclosure201, light-detecting sensor208may be placed in the middle of the bottom edge equidistant from the bottom corners of battery enclosure201, and light-detecting sensor210may be placed in the middle of the left edge equidistant from the left corners of battery enclosure201. In some embodiments, light-detecting sensors204,206,208, and210may be arranged nearer to positions where certain battery abnormalities are more likely to happen. The positions where certain battery abnormalities are more likely to happen may be determined based on testing data, and may also be user-defined. In some embodiments, battery monitoring module200includes battery circuitry212. Battery circuitry212may include power electronics, monitoring circuitry, switches, including switch220(e.g., a cutoff switch), and sensors, including temperature sensor214, voltage sensor216, and current sensor218. In some embodiments, the interior of the battery enclosure may be coated with reflective coating222(e.g., a coating with reflective or luminescent properties, or a combination of both properties). For example, reflective coating222may include properties that allow the coating to reflect light (e.g., a suitable reflectivity). In a further example, reflective coating222may include properties that allow the coating to luminesce light at a suitable wavelength based on absorbed light at one or more wavelengths. For example, there may be areas inside battery enclosure201where it is difficult or impossible to place a light-detecting sensor (e.g., light-detecting sensor204), or areas out of range of a light-detecting sensor. These areas may be coated with reflective coating222, which may reflect light back into the range of a light-detecting sensor (e.g., one or more of light-detecting sensors204,206,208, and210). Reflective coating222may be applied to battery enclosure201during the manufacturing process of battery enclosure201, or may be applied after the manufacturing of battery enclosure201, or a combination of both. 
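Returning briefly to the lookup-table fault classification described above, the following is an illustrative Python sketch. The table keys and entries paraphrase the examples given in the text and are assumptions, not an exhaustive or authoritative mapping from the disclosure.

```python
# Illustrative sketch of a lookup table mapping (battery characteristic,
# light characteristic) pairs to fault conditions, per the examples above.
# All keys and entries are assumed for illustration.

FAULT_LOOKUP = {
    ("high_current", "uv"): "arcing",
    ("elevated_temperature", "uv"): "arcing",
    ("high_current", "uv+visible"): "arcing",
    ("normal", "white+blue+uv"): "sparking",
    ("elevated_temperature", "visible"): "fire (components heating up)",
    ("elevated_temperature", "low_frequency_ir"): "fire",
    ("normal", "any"): "enclosure breach",
}

def classify(battery_characteristic: str, light_characteristic: str) -> str:
    """Map a (battery characteristic, light characteristic) pair to a fault."""
    key = (battery_characteristic, light_characteristic)
    # Simplification: any unlisted combination falls back to the
    # "light detected with otherwise normal readings" case (breach).
    return FAULT_LOOKUP.get(key, FAULT_LOOKUP[("normal", "any")])

print(classify("high_current", "uv"))  # -> "arcing"
```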
In some embodiments, the exterior of battery cells112may be coated with reflective coating222. For example, reflective coating222may be applied to battery cells112to reflect light back into the range of a light-detecting sensor (e.g., one or more of light-detecting sensors204,206,208, and210). In some embodiments, instead of components (e.g., the battery enclosure, battery cells, etc.) being coated with reflective coating222, the components may be made from materials that have reflective properties (e.g., glass, polished metal, etc.). In some embodiments, instead of components (e.g., the battery enclosure, battery cells, etc.) being coated with reflective coating222, the components may be processed to increase reflectivity. For example, the interior of the battery enclosure may be polished to increase the reflectivity. Battery monitoring module200also includes light emitter224. Light emitter224may be any device that emits light. The functionality of light emitter224is discussed in more detail in connection withFIG.4. Light emitter224may be controlled via a processing module. When battery monitoring module200corresponds to battery module102ofFIG.1, light emitter224may be controlled via processing module104. FIG.3shows a system diagram of an illustrative battery monitoring module300experiencing a battery condition, in accordance with some embodiments of the present disclosure. Light-detecting sensors302,304,306, and308are placed within battery enclosure301. For example, in some embodiments, light-detecting sensors302,304,306, and308are placed on the perimeter of battery enclosure301. Battery enclosure301may include a plurality of battery cells, including battery cell310. In some embodiments, battery module300corresponds to battery module200ofFIG.2experiencing a battery condition. In some embodiments, battery condition312(e.g., an arcing condition, a fire, etc.) may occur proximate to battery cell310. For example, in some embodiments, battery cell310may experience arcing, which may generate wavelengths in the ultraviolet (UV), visible light, and infrared (IR) wavelength ranges. Battery condition312may generate light (e.g., composed of one or more wavelengths), which may be represented by light propagation lines314,316, and318. Light propagation lines314,316, and318indicate light propagating through the battery module towards the sensors (e.g., light-detecting sensors302,304, and308). Once the light propagation reaches light-detecting sensor302, light-detecting sensor302may send one or more signals containing information about received light to a processing module such as processing module104. In some embodiments, the processing module may determine the location of battery condition312by determining the differences in time between when the light reaches light-detecting sensors302,304, and308. In some embodiments, the time differences can be determined by identifying common fiducial points in the sensor signals and determining the times corresponding to the fiducial points. Based on the time differences, the processing module may triangulate the location of battery condition312, using normal triangulation means. In some embodiments, processing module104may determine the location of battery condition312by comparing the amplitudes of the light-detecting sensor signals. In some embodiments, the signal amplitudes indicate how much light is received by each sensor. It is expected that a light-detecting sensor will receive more light the closer it is to a battery condition. 
Accordingly, based on the signal amplitudes, the processing module may triangulate the location of battery condition312, using normal triangulation techniques. FIG.4shows a system diagram of illustrative battery monitoring module400that includes light emitter404, in accordance with some embodiments of the present disclosure. In some embodiments, battery monitoring module400corresponds to battery module102ofFIG.1or battery monitoring module200ofFIG.2. Battery monitoring module400comprises battery enclosure401that may fully encompass one or more battery cells. Battery enclosure401also includes light-detecting sensor402and light emitter404. Light-detecting sensor402and light emitter404may be mounted in any suitable location inside of battery enclosure401. In some embodiments, two or more light-detecting sensors and two or more light emitters may be mounted inside of battery enclosure401. Light emitter404may be any device that emits light, such as a light emitting diode (LED), organic light emitting diode (OLED), laser, incandescent light, filament, etc. Light emitter404may be controlled by a processing module such as processing module104ofFIG.1. The processing module may "turn on" light emitter404(e.g., send a signal to light emitter404to generate light) on a regularly scheduled interval, based on a user command, or any combination of the foregoing. The light emitted by light emitter404can be used to determine whether a smoke condition exists inside of battery enclosure401. The amount of light from light emitter404reaching light-detecting sensor402is expected to be affected by the presence of smoke. For example, the soot in smoke may absorb and also reflect light. Depending on where light-detecting sensor402and light emitter404are positioned and the content of the smoke, the amount of light received during a smoke condition may decrease or increase. Accordingly, when light emitter404is generating light, the signal from light-detecting sensor402can be recorded and analyzed to determine whether smoke is present inside of battery enclosure401. In some embodiments, a processing module (e.g., processing module104ofFIG.1) may use software (e.g., software108ofFIG.1) to compare the signal from light-detecting sensor402to a "control" signal stored in memory (e.g., storage110ofFIG.1). The "control" signal may be a sensor signal or a signal property (e.g., signal amplitude) taken during the manufacture and setup process of the battery enclosure. The processing module may compare one or more properties of the signals (e.g., the amplitude of each signal). The "control" signal may also be updated periodically as battery monitoring module400ages. For example, the "control" signal may be updated after a predetermined amount of time, a predetermined amount of use, or a predetermined number of charges. In some embodiments, the processing module may determine that smoke is present or a similar battery condition exists within the battery enclosure when the amount of light received (e.g., amplitude) is lower than the control signal by more than a threshold amount. The threshold amount may be a value determined by test data and stored in memory (e.g., storage110ofFIG.1). In some embodiments, the processing module may determine that smoke is present or a similar battery condition exists within the battery enclosure when the amount of light received (e.g., amplitude) is greater than the control signal by more than a threshold amount. 
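A minimal sketch of the smoke check just described: compare the light amplitude measured while the emitter is on against a stored "control" amplitude, flagging a deviation in either direction. The function name, threshold, and example values are assumptions for illustration.

```python
# Illustrative sketch (assumed names/values) of the control-signal comparison.

def smoke_suspected(measured_amplitude: float,
                    control_amplitude: float,
                    threshold: float) -> bool:
    """Flag smoke if the received light deviates from the control by more than
    `threshold` in either direction (smoke may absorb or reflect light)."""
    return abs(measured_amplitude - control_amplitude) > threshold

# Example: control recorded at setup; deviation beyond 0.2 units flags smoke.
print(smoke_suspected(measured_amplitude=0.55, control_amplitude=0.90, threshold=0.2))
```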
The processing module may also store the signal from the light-detecting sensor (e.g., in a database in storage110) to allow a user or operator to track the condition inside of battery enclosure401over a period of time. It will be understood that light-detecting sensor402may include any suitable imaging sensor disclosed herein. In some embodiments, light-detecting sensor402comprises an imaging sensor that takes an image inside of battery enclosure401when light emitter404is emitting light. In such embodiments, the brightness of the image, the color or colors of the image, or a combination thereof can be analyzed to determine whether a smoke or other condition is present. In some embodiments, the image can be analyzed by itself or in comparison to a "control" image to determine whether a smoke or other condition is present. FIG.5shows a system diagram of light-detecting sensor500, in accordance with some embodiments of the present disclosure. Light-detecting sensor500includes light-detecting sensor body502. Light-detecting sensor body502may be made out of any functional material, such as plastic, metal, ceramic, etc. Light-detecting sensor body502contains the circuitry needed for the light-detecting sensor to function and sense the presence of light. For example, light-detecting sensor500may become electrically conductive when exposed to light and may generate a current (e.g., a signal) corresponding to the detected light. Top surface504is comprised of a photoconductive material (e.g., germanium, gallium, selenium, silicon with added dopants, etc.). Top surface504becomes electrically conductive due to the absorption of electromagnetic radiation (e.g., such as visible light, UV light, IR light, etc.). Filter506may be an optical filter that is separately coupled to light-detecting sensor500, or built into light-detecting sensor500during the manufacturing process. Optical filter506may be a device that selectively transmits light of different wavelengths. For example, optical filter506may be an absorption filter (e.g., a filter made of various compounds added to glass or plastic, where the compounds absorb some wavelengths of light while transmitting others), or a dichroic filter (e.g., a filter made by coating a glass substrate with a series of optical coatings to reflect unwanted wavelengths and transmit the remainder). In some embodiments, light-detecting sensor500may be powered and controlled via positive terminal508and negative terminal510. For example, light-detecting sensor500may be coupled to a switch (e.g., such as switch106ofFIG.1), where the switch is then coupled to processing module104ofFIG.1. Processing module104may close the switch to turn "on" light-detecting sensor500, and may open the switch to turn "off" light-detecting sensor500. In some embodiments, light-detecting sensor500is passive and current is generated across terminals508and510when photons are absorbed. In some embodiments, light-detecting sensor500may include multiple sensor elements. For example, an array of sensor elements may be used, each with a different optical filter or no optical filter. The array of sensor elements may detect two or more of the following wavelengths of light: visible light, UV light, IR light, low frequency IR light, and any other suitable wavelengths of light. In some embodiments, light-detecting sensor500may be used as any of the light-detecting sensors disclosed herein. 
For example, light-detecting sensor500may be used as light-detecting sensor114and light-detecting sensor116and may include light filter118. In some embodiments, light-detecting sensor500may be mounted directly to battery enclosure201ofFIG.2, or any surface within battery enclosure201. In some embodiments, positive terminal508and negative terminal510may be coupled to processing module104ofFIG.1via connectors (e.g., using through-hole or surface mount soldering techniques). In some embodiments, light-detecting sensor500may be a photodetector, including photoemission devices (e.g., gaseous ionization detectors, photomultipliers, phototubes, microchannel plate detectors, etc.), semiconductor devices (e.g., active-pixel sensors, cadmium zinc telluride radiation detectors, charge-coupled devices, HgCdTe infrared detectors, reverse-biased light emitting diodes (LEDs), photoresistors, photodiodes, phototransistors, quantum dot photoconductors, semiconductor detectors, silicon drift detectors, etc.), photovoltaic devices (e.g., photovoltaic cells), thermal devices (e.g., bolometers, cryogenic detectors, pyroelectric detectors, thermopiles, Golay cells), photochemical devices (e.g., photoreceptor cells, chemical detectors), and polarization devices (e.g., polarization-sensitive photodetectors). FIG.6is a flowchart of an illustrative process600for monitoring a battery module, in accordance with some embodiments of the present disclosure. It should be noted that process600or any step thereof could be performed on, or provided by, any of the systems shown inFIGS.1-5. In addition, one or more steps of process600may be incorporated into or combined with one or more steps of any other processes or embodiments described herein. Process600begins at step602, where a processing module (e.g., processing module104ofFIG.1) receives a sensor signal from a light sensor (e.g., light-detecting sensor114). For example, at step602, processing module104may record, using software108, the input from light-detecting sensor114. As another example, at step602, processing module104may record the input from multiple light-detecting sensors (e.g., light-detecting sensors114and116). At604, processing module104determines, using processing circuitry, a light characteristic within the battery module based on the sensor signal. For example, the processing module may analyze the light signal to determine one or more characteristics (e.g., wavelength, intensity, etc.). At606, processing module104determines whether the one or more light characteristics indicate a battery condition. Processing module104may compare the light characteristics to a database containing battery conditions associated with light characteristics (e.g., a look up table). In response to the comparison, processing module104may retrieve a battery condition (e.g., arcing) from the database. In some embodiments, for example, at step606, processing module104may determine, using software108, that light is detected in battery module102. If at step606processing module104determines that "Yes," the one or more light characteristics indicate a battery condition, then process600proceeds to616. In some embodiments, for example, at step606, processing module104may determine, using software108, that the determined light characteristic from the light signal from light-detecting sensor114indicates (e.g., based on the amplitude of the current signal) that white, blue, and ultraviolet light was detected. 
Software108may determine (e.g., via a lookup table) that there is sparking in battery module102. In some embodiments, for example, at step606, processing module104may determine, using software108, that there is no input from light-detecting sensor114or the input is below a threshold (e.g., a noise threshold), and therefore no light is detected in battery module102. If, at606, processing module104determines that "No," the one or more light characteristics do not indicate a battery condition, then process600proceeds to612. In some embodiments, if, at606, processing module104determines that "No," the one or more light characteristics do not indicate a battery condition, process600may bypass step612and proceed directly to step614. Step612is an optional step in process600. For example, if, at606, processing module104determines that "No," the one or more light characteristics do not indicate a battery condition, then process600may proceed directly to step614, and processing module104may determine that the battery module is operating under normal conditions. In some embodiments, processing module104may receive a signal from an auxiliary sensor (e.g., voltage sensor124, current sensor126, or temperature sensor128ofFIG.1) in addition to the signal from light-detecting sensor114. In some embodiments, the processing module receives the battery monitoring signal from two or more auxiliary sensors. For example, the battery monitoring signal may be one or more of a voltage signal, current signal, and temperature signal. For example, at608, processing module104receives a battery monitoring signal. At610, processing module104determines one or more battery characteristics from the battery monitoring signal. Processing module104may determine the battery characteristics from the battery monitoring signal using the techniques described above. At612, processing module104determines whether the one or more light characteristics and battery characteristics indicate a battery condition. Processing module104may determine whether the characteristics (e.g., the light characteristics and/or the battery characteristics) indicate a battery condition using the techniques described above. For example, the processing module may determine whether the characteristics of the light (e.g., in the UV wavelength range) and the battery monitoring signals (e.g., a signal from current sensor126indicating an abnormally high current) indicate a battery condition (e.g., arcing). Processing module104may compare the light characteristics and battery characteristics to a database containing battery conditions associated with specific light characteristics and battery characteristics (e.g., a look up table). In response to the comparison, processing module104may retrieve a battery condition (e.g., arcing) from the database. For example, at step612, processing module104may determine, using software108, that the battery characteristic determined from the battery monitoring signal from current sensor126indicates that the current in battery cells112is higher than the upper bound of the normal current operating conditions, and may determine that the light characteristic determined from light-detecting sensor114indicates that a light wavelength in the ultraviolet range was detected. 
As another example, at step612, after processing module104determines, using software108, that the battery characteristic determined from the battery monitoring signal from current sensor126indicates that the current in battery cells112is higher than the upper bound of the normal current operating conditions, and that the light characteristic determined from the signal from light-detecting sensor114indicates that light wavelengths in both the visible range and the ultraviolet range were detected, then processing module104may determine (e.g., via a lookup table of possible fault conditions using the inputs of high current and UV and visible light) that the fault condition is arcing. As yet another example, at step612, after processing module104determines, using software108, that the battery characteristic determined from the battery monitoring signal from temperature sensor128indicates that the temperature in battery cells112is elevated and determines that the light characteristic determined from light-detecting sensor114indicates that a light wavelength in the visible range was detected, processing module104may determine (e.g., via a lookup table of possible fault conditions using the elevated temperature and visible light) that the fault condition is a fire and that the components in battery module102are heating up. If, at612, processing module104determines that "No," the one or more light characteristics and battery characteristics do not indicate a battery condition, then the process proceeds to step614. If, at612, processing module104determines that "Yes," the one or more light characteristics and battery characteristics indicate a battery condition, then the process proceeds to616. At614, processing module104determines that the batteries are operating under normal conditions. For example, in some embodiments, processing module104may determine that the characteristics do not indicate a battery condition and that the battery module is operating under normal conditions. At616, processing module104alerts the user of the battery condition. For example, in some embodiments, processing module104may alert the user by automatically shutting down the operation of the battery module (e.g., using active switch106as an emergency cutoff switch). In some embodiments, processing module104may alert the user by generating a pop-up message on a user device (e.g., a computer, mobile phone, smart phone, etc.) associated with battery monitoring module100. In some embodiments, for example, at step616, processing module104may automatically power off the battery cells (e.g., using active switch106as a kill switch) in response to determining the occurrence of a possible battery fault. In some embodiments, processing module104may automatically power off the entire battery module102using active switch106in response to determining the occurrence of a possible battery fault. In some embodiments, software108may automatically power off one or more battery cells112in battery module102using active switch106in response to determining the occurrence of a possible battery fault. FIG.7is a flowchart of an illustrative process700for monitoring a battery module using a light source, in accordance with some embodiments of the present disclosure. It should be noted that process700or any step thereof could be performed on, or provided by, any of the systems shown inFIGS.1-5. 
In addition, one or more steps of process700may be incorporated into or combined with one or more steps of any other processes or embodiments described herein. Process700begins at702, where processing module104causes a light source to emit light within a battery module (e.g., battery module102). For example, processing module104may send a command to light emitter120to emit light. In some embodiments, processing module104may send a command to light emitter120to emit light at a specific wavelength, intensity, or combination thereof. At704, processing module104determines at least one light characteristic of a received light signal. Processing module104determines the at least one light characteristic of the received light signal using the methods described above. For example, processing module104may receive the light signal using light-detecting sensor114. Processing module104may determine, using software108, the light characteristic of the received light signal (e.g., using a look up table) by comparing the light signal to a datastore. At706, processing module104compares the at least one light characteristic to at least one reference characteristic. In some embodiments, processing module104retrieves the reference characteristic from a database in storage110, or from a remote server via a network connection. For example, processing module104may retrieve a reference characteristic (e.g., intensity of light) from storage110, where the reference characteristic is a control value determined by test data, or a control value input by a user, or a preprogrammed default control value, or a combination of the above. In some embodiments, processing module104may use software108to compare the characteristic determined from a signal from light-detecting sensor402to a "control" characteristic stored in memory (e.g., storage110ofFIG.1). The "control" characteristic may be a signal property (e.g., signal amplitude) taken during the manufacture and setup process of the battery enclosure. The processing module may compare one or more properties of the signals (e.g., the amplitude of each signal). The "control" characteristic may also be updated periodically as battery monitoring module100ages. For example, the "control" characteristic may be updated after a predetermined amount of time, a predetermined amount of use, or a predetermined number of charges. In some embodiments, for example, when the received light signal comprises an image of the inside of the battery enclosure, processing module104may compare the image to a control image (e.g., where the control image was taken at some point prior to the current image), and compare characteristics of the two images. In response to determining that the image has characteristics outside of a threshold range in comparison to the control image (e.g., the image is significantly darker than the control image), processing module104may determine that particles caused by soot, smoke, or a potential solvent leakage are contained in battery module102, and that a battery fault condition is occurring. At708, processing module104determines a battery condition of the battery module based on the comparison. For example, processing module104may determine a battery condition from the comparison (e.g., smoke from a fire) using the methods as described above. 
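As an illustration of the image-based variant described above, the following sketch compares the mean brightness of a current image of the enclosure interior against a control image. Images are modeled here as flat lists of grayscale pixel values, and the threshold and pixel values are assumed for illustration only.

```python
# Hypothetical sketch of the control-image comparison; standard library only.
from statistics import mean

def image_darker_than_control(image: list[int],
                              control: list[int],
                              threshold: float) -> bool:
    """True if the current image is darker than the control by more than
    `threshold`, suggesting soot/smoke/solvent particles in the enclosure."""
    return mean(control) - mean(image) > threshold

control_pixels = [200, 198, 205, 201]   # captured at manufacture/setup
current_pixels = [120, 131, 118, 125]   # captured while the emitter is on
print(image_darker_than_control(current_pixels, control_pixels, threshold=40.0))
```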
In some embodiments, the processing module may determine that smoke is present or a similar battery condition exists within the battery enclosure when the characteristic determined from the amount of light received (e.g., amplitude) is lower than the control characteristic by more than a threshold amount. The threshold amount may be a value determined by test data and stored in memory (e.g., storage110ofFIG.1). In some embodiments, the processing module may determine that smoke is present or a similar battery condition exists within the battery enclosure when the characteristic determined from the amount of light received (e.g., amplitude) is greater than the control characteristic by more than a threshold amount. FIG.8is a flowchart of an illustrative process800for detecting the location of a battery condition, in accordance with some embodiments of the present disclosure. It should be noted that process800or any step thereof could be performed on, or provided by, any of the systems shown inFIGS.1-5. In addition, one or more steps of process800may be incorporated into or combined with one or more steps of any other processes or embodiments described herein. Process800begins at802, where processing module104receives a signal from a light sensor inside battery module102corresponding to an intensity of light of a light source. For example, at step802, processing module104may receive a signal from light-detecting sensor114corresponding to detected light from a light source, where the amplitude of the signal represents the intensity of the light. At804, processing module104receives at least one additional sensor signal from at least one additional light sensor inside battery module102located at a different position within battery module102. For example, at804, processing module104may receive sensor signals corresponding to light propagation lines314,316, and318, which indicate light propagating through the battery module towards the sensors (e.g., light-detecting sensors302,304, and308). Once light propagation line314reaches light-detecting sensor302, light-detecting sensor302may send a signal containing information about the received light to processing module104. At806, processing module104determines at least one characteristic of each of the received signals. For example, processing module104may determine the intensity of light for each of the received signals using the amplitudes of the received signals. Processing module104may determine the intensity from the amplitude by retrieving the intensity from a datastore that links amplitudes to their corresponding intensities. As another example, processing module104may determine the arrival times of fiducial points in the received signals. At808, processing module104determines a location of the source of light within the battery module based on the determined characteristics of the received light signals. For example, processing module104may determine the location of the source of light within the battery module based on the intensities of light received at the sensors using known signal triangulation techniques. In some embodiments, for example, at808processing module104may determine the location of battery condition312by comparing the amplitudes of the light-detecting sensor signals. In some embodiments, the signal amplitudes indicate how much light is received by each sensor. It is expected that a light-detecting sensor will receive more light the closer it is to a battery condition. 
Accordingly, based on the signal amplitudes, the processing module may triangulate the location of battery condition312, using normal triangulation techniques. In some embodiments, for example, at808processing module104may determine the location of battery condition312by determining differences in time between when light reaches the different light-detecting sensors (e.g., light-detecting sensors302,304, and308). In some embodiments, the time differences can be determined by identifying common fiducial points in the sensor signals (e.g., a peak or valley of a signal or a derivative of the signal) and determining the differences in arrival times corresponding to the fiducial points. Based on the time differences, the processing module may triangulate the location of battery condition312, using normal triangulation means. In some embodiments, for example, in response to determining the location of battery condition312, processing module104may isolate battery condition312using active switch106, or any other suitable control means. For example, processing module104may determine that battery condition312is confined to two battery cells (e.g., two cells in battery cells112). The battery cells may be coupled to the busbar with one or more active switches. Processing module104may, by controlling the one or more active switches, uncouple the two battery cells from the busbar to mitigate battery condition312. The above-described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims that follow. Additionally, it should be noted that any of the devices or equipment discussed in relation toFIGS.1-5could be used to perform one or more of the steps in processes600-800inFIGS.6-8, respectively. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiments in a suitable manner, done in different orders, performed with additional steps, performed with omitted steps, or done in parallel. For example, each of these steps may be performed in any order or in parallel or substantially simultaneously to reduce lag or increase the speed of the system or method. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. The foregoing is merely illustrative of the principles of this disclosure and various modifications may be made by those skilled in the art without departing from the scope of this disclosure. The above-described embodiments are presented for purposes of illustration and not of limitation. The present disclosure also can take many forms other than those explicitly described herein. Accordingly, it is emphasized that this disclosure is not limited to the explicitly disclosed methods, systems, and apparatuses, but is intended to include variations to and modifications thereof, which are within the spirit of the following claims.
11860233
DETAILED DESCRIPTION OF THE INVENTION An example of embodiments of the present disclosure will be described below with reference to the accompanying drawings. In the drawings, the same or equivalent components are denoted by the same reference character. (Battery Unit) FIG.1is an exploded perspective view illustrating a battery unit according to the present embodiment.FIG.2Ais a side view illustrating an example of a battery module that can be included in the battery unit illustrated inFIG.1. The battery unit100illustrated inFIG.1is a battery pack (also referred to as an intelligent power unit: IPU) mountable on an electric vehicle, such as a hybrid electric vehicle (HEV), a hybrid electric vehicle with an external power supply function (plug-in hybrid electric vehicle: PHEV), or a battery electric vehicle (BEV). As illustrated inFIGS.1and2A, the battery unit100includes, as main components, battery modules110, battery heat flow detectors120, a reference heat flow detector130, a voltage detector141, a current detector142, temperature detectors143, and a battery management system (BMS)200. In the example illustrated inFIG.1, the components of the battery unit100are housed in a case101and covered with a cover102. In the example illustrated inFIG.1, the battery unit100further includes a lower frame103and an upper frame104. The battery unit100further includes a lower cooling plate105for cooling the battery modules110. The battery unit100further includes a mechanism106(e.g., a fan, a cooling air duct, and an intake duct) that introduces air to cool the battery modules110. As illustrated inFIG.2A, each battery module110has, as main components, a stack112including a plurality of battery cells111stacked together, a pair of end plates113sandwiching the stack112in the stacking direction, and a cell bus bar114connecting the plurality of battery cells111to each other. As illustrated inFIG.1, the plurality of battery modules110may be connected to each other via a module bus bar119. The battery cells111may be any type of battery cell, non-limiting examples of which include lithium-ion batteries. Among such lithium-ion batteries, the following battery is preferable: a lithium-ion battery with a negative electrode containing a material that generates heat due to a phase transition or the like, such as graphite; or a lithium-ion battery with a positive electrode containing a material that generates heat due to a phase transition or the like, such as lithium cobalt oxide (LCO) as a layered compound or lithium nickel oxide (LNO) as a layered compound. In the following, a lithium-ion battery will be described which includes a negative electrode containing graphite as a material that generates heat due to a phase transition or the like; and a positive electrode containing lithium nickel cobalt manganese oxide (NCM) as a layered compound (that is, for the lithium-ion battery to be described below, a SOC of 0% is mainly determined depending on a potential of the negative electrode; negative electrode cut). Note that the present disclosure can be similarly applied to a lithium-ion battery (whose SOC of 0% is mainly determined depending on a potential of the positive electrode; positive electrode cut), which includes a positive electrode containing a material such as LCO or LNO that generates heat due to a phase transition or the like. The battery heat flow detectors120are heat flow sensors that detect a heat flow of the battery cells111and the battery unit100. 
In other words, the heat flow detected by the battery heat flow detectors120is composed of not only the heat flow of the battery cells111, but also a heat flow affected by various heat flows in the battery unit100, namely effects of noise. The heat flow sensor may be any type of sensor, non-limiting examples of which include temperature sensors such as a Peltier element, a thermopile, and a thermocouple. Among these sensors, a Peltier element that has high heat flow sensitivity and can also be used as a temperature control device is preferable. As illustrated inFIG.2A, a Peltier element for cooling the battery cells111may be disposed between the battery cells111and the cooling plate105. In this case, the Peltier element can be used for both heat flow detection and cooling. For example, the Peltier element can be used as a heat flow sensor when a heat flow is to be detected, and otherwise, it can be used as a cooler. It is only necessary for each battery heat flow detector120to be disposed on or adjacent to at least one of the battery cells111included in the battery module110. As illustrated inFIG.2A, the battery heat flow detectors120may be disposed on or adjacent to two of the battery cells111, the two being located next to the end plates113. The battery heat flow detector120may further be disposed on or adjacent to one battery cell111located at the center in the stacking direction of the battery cells111, in addition to the two battery cells111next to the end plates113. The reference heat flow detector130is a heat flow sensor that detects, as a reference heat flow, a heat flow of the battery unit100, the heat flow composed of various heat flows in the battery unit100, namely heat flows of noise. Similarly to the above, the heat flow sensor may be any type of sensor, non-limiting examples of which include temperature sensors such as a Peltier element, a thermopile, and a thermocouple. Among these sensors, the Peltier element is preferable. The Peltier element can be used for both cooling the battery cells111and detecting the heat flow. The reference heat flow detector130is disposed in the battery unit100at a location where temperature fluctuation is small and heat capacity is large. For example, the reference heat flow detector(s)130can be disposed at any of the following locations (A) to (F). (A) Cooling Plate105for Cooling the Battery Modules110 For example, as illustrated inFIG.1, the cooling plate105is disposed in contact with the bottom surfaces of the battery modules110, and the reference heat flow detector130is disposed on or adjacent to a surface of the cooling plate105, the surface not facing the bottom surfaces of the battery cells111. Although the reference heat flow detector130can be disposed at any position with respect to the plurality of battery cells111, it may be disposed at, for example, a position corresponding to one battery cell111at the center in the stacking direction of the battery cells111. (B) End Plates113of the Battery Modules110 FIG.2Bis a side view illustrating another example of a battery module that can be included in the battery unit illustrated inFIG.1. As illustrated inFIG.2B, for example, the reference heat flow detector130may be disposed on or adjacent to a surface of each end plate113, the surface not facing the battery cells111. 
(C) Bus Bar114,119of the Battery Modules110 For example, the reference heat flow detector130may be disposed on or adjacent to a surface of the cell bus bar114connecting the battery cells to each other (seeFIG.2A), the surface not facing the battery cells111. For example, the reference heat flow detector130may be disposed on or adjacent to a surface of the module bus bar119connecting the battery modules110to each other (seeFIG.1), the surface not facing the battery cells111. Although the reference heat flow detector130can be disposed at any position with respect to the plurality of battery cells111, it may be disposed at, for example, a position corresponding to one battery cell111at the center in the stacking direction of the battery cells111. (D) Flange in the Battery Unit100 For example, as illustrated inFIG.1, the reference heat flow detector130may be disposed on or adjacent to a flange (joint) which is provided in the battery unit100and via which the battery modules are fixed. (E) Space within the Battery Unit100 For example, as illustrated inFIG.1, the reference heat flow detector130may be disposed in a floating manner in a space within the battery unit100. (F) Pipe Protecting a High-Voltage Conductor Wire For example, as illustrated inFIG.1, the reference heat flow detector130may be disposed inside or outside a pipe that protects a high-voltage conductor wire (e.g., inside the pipe if the pipe is exposed to outside air, or outside the pipe if the pipe is not exposed to outside air). The battery heat flow detectors120may be disposed on or adjacent to the two battery cells111that are next to the end plates113, and the reference heat flow detector130may be disposed on or adjacent to one of the battery cells111that is different from the two on or adjacent to which the battery heat flow detectors120are disposed (e.g., one battery cell111located at the center in the stacking direction of the battery cells111). The voltage detector141is a voltage sensor that detects an open circuit voltage or a closed circuit voltage of the battery cells111. The voltage detector141may be disposed at any location. For example, as illustrated inFIG.2A, the voltage detector141may be disposed on or adjacent to the battery module110. The current detector142is a current sensor that detects a current of the battery cells111. The current detector142may be disposed at any location. For example, as illustrated inFIG.2A, the current detector142may be disposed on or adjacent to the battery module110. The temperature detectors143are temperature sensors that detect temperatures of the respective components. The temperature sensor may be any type of temperature sensor, a non-limiting example of which is a thermocouple. As illustrated inFIG.2A, each temperature detector143is disposed on or adjacent to an associated one of the battery cells111and detects the temperature of the associated battery cell111. The temperature detectors143are also disposed at the positions where the battery heat flow detectors120are disposed, and detect temperatures of the heat flow detection positions. As illustrated inFIGS.1and2B, the temperature detectors143are also disposed at the positions where the reference heat flow detectors130are disposed, and detect temperatures of the heat flow detection positions. 
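Since the battery heat flow detectors capture the cell heat flow plus noise, while the reference heat flow detector captures the noise heat flows alone, one natural way to combine the readings is a simple subtraction (this use is elaborated later in the description). The following is a minimal sketch under that assumption; all names and values are illustrative.

```python
# Illustrative sketch (assumed names/values): isolating the cell heat flow
# by subtracting the reference heat flow (noise) measured at a location
# with small temperature fluctuation and large heat capacity.

def cell_heat_flow(battery_hf: float, reference_hf: float) -> float:
    """Battery heat flow detector reading minus reference (noise) heat flow."""
    return battery_hf - reference_hf

def averaged_cell_heat_flow(positive_side_hf: float, negative_side_hf: float,
                            reference_hf: float) -> float:
    """Average of positive- and negative-electrode-side readings, noise removed."""
    return (positive_side_hf + negative_side_hf) / 2 - reference_hf

print(cell_heat_flow(battery_hf=0.48, reference_hf=0.07))  # ~0.41 (arbitrary units)
```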
(Battery Management System: Battery State Estimator) The battery management system (BMS, also referred to as the electronic control unit: ECU)200performs overall control of the battery cells111, including charge/discharge control, over-charge protection, over-discharge protection, and monitoring of a state of the battery (e.g., a state of charge (SOC) or a state of health (SOH)) of the battery cells111. The battery management system200includes, as main components, a battery state estimator210and a storage220. The battery state estimator210includes, for example, an arithmetic processor, such as a digital signal processor (DSP) or a field-programmable gate array (FPGA). The battery state estimator210performs various functions by executing, for example, predetermined software (programs) stored in the storage220. The various functions of the battery state estimator210may be performed by way of cooperation of hardware and software, or may be performed only by hardware (electronic circuitry). For example, the storage220is a rewritable memory, such as an EEPROM. The storage220stores the predetermined software (programs) for allowing the battery state estimator210to perform the above-mentioned various functions. The storage220stores, in a table map format, characteristics (OCV vs. SOC characteristics) relating to a correlation between the open circuit voltage and the SOC of the battery cells111in, for example, an initial state, which are a plurality of characteristics of the battery cells111each associated with a temperature. As illustrated inFIG.3A, the storage220stores, in a table map format, characteristics relating to a correlation between the closed circuit voltage and the SOC of the battery cells111in, for example, the initial state (CCV vs. SOC characteristics), which are a plurality of characteristics of the battery cells111each associated with a temperature and a current (charge). As illustrated inFIG.3A, the storage220stores, in a table map format, characteristics relating to a correlation between a heat flow and the SOC of the battery cells111in the initial state (HF vs. SOC initial characteristics), which are a plurality of characteristics of the battery cells111each associated with a temperature and a current (charge). As illustrated inFIG.3A, the storage220stores, in a table map format, characteristics relating to a correlation between an enthalpy potential and the SOC of the battery cells111in the initial state (UH vs. SOC initial characteristics), which are a plurality of characteristics of the battery cells111each associated with a temperature and a current (charge). Here, the enthalpy potential UH is a parameter calculated according to the following formula that is based on the heat flow HF, the closed circuit voltage CCV, and a current I of the battery cells111(see Non-Patent Document 1). UH=CCV−HF/I When charge and discharge are not being performed, for example, when the vehicle is at standstill during actual use, the battery state estimator210estimates a SOC of the battery cells111that corresponds to an OCV of the battery cells111detected by the voltage detector141, by referring to the table maps of the OCV vs. SOC characteristics stored in the storage220. 
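The following sketch illustrates the two computations just described: interpolating SOC from a temperature-selected OCV vs. SOC table map, and evaluating the enthalpy potential UH = CCV − HF/I. The table values, temperatures, and function names are assumptions for illustration, not data from the disclosure.

```python
# Illustrative sketch (assumed numbers) of table-map SOC lookup and the
# enthalpy potential formula UH = CCV - HF / I.
import bisect

# One OCV vs. SOC table map per temperature; OCV assumed monotonically increasing.
OCV_SOC_MAPS = {
    25: [(3.45, 10.0), (3.55, 30.0), (3.65, 50.0), (3.80, 70.0), (4.00, 90.0)],
}

def estimate_soc(ocv: float, temperature_c: int) -> float:
    """Linearly interpolate SOC from the table map selected by temperature."""
    table = OCV_SOC_MAPS[temperature_c]
    voltages = [v for v, _ in table]
    i = min(max(bisect.bisect_left(voltages, ocv), 1), len(table) - 1)
    (v0, s0), (v1, s1) = table[i - 1], table[i]
    return s0 + (s1 - s0) * (ocv - v0) / (v1 - v0)

def enthalpy_potential(ccv: float, hf: float, current: float) -> float:
    """UH = CCV - HF / I (current must be nonzero, i.e., during charge)."""
    return ccv - hf / current

print(estimate_soc(3.60, 25))            # -> 40.0 (% SOC)
print(enthalpy_potential(3.9, 0.5, 10))  # -> 3.85
```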
Further, at the time of a start of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator210estimates, as a start SOC, a SOC of the battery cells111that corresponds to an OCV of the battery cells111detected by the voltage detector141, by referring to the table maps of the OCV vs. SOC characteristics stored in the storage220. Further, at the time of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator210measures HF vs. SOC present characteristics as illustrated inFIGS.3B and3C, and determines a SOC(HF) of the battery cells based on a peak of differential characteristics of the measured HF vs. SOC present characteristics. At this time, the battery state estimator210calculates a SOC(OCV) of the battery cells that corresponds to the above-mentioned SOC(HF), based on the start SOC estimated from the table maps of the OCV vs. SOC characteristics described above. In a case where the SOC(OCV) calculated from the table maps of the OCV vs. SOC characteristics deviates with respect to the SOC(HF) determined based on the peak of the differential characteristics of the HF vs. SOC present characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator210corrects the table maps of the OCV vs. SOC characteristics based on the amount of deviation. The details of this correction (SOC Estimation Correction 1) will be described later. Alternatively, at the time of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator210measures UH vs. SOC present characteristics of the battery cells111as illustrated inFIGS.3B and3C, and determines a SOC(UH) of the battery cells based on a peak of differential characteristics of the measured UH vs. SOC present characteristics. At this time, the battery state estimator210calculates a SOC(OCV) of the battery cells that corresponds to the above-mentioned SOC(UH), based on the start SOC estimated from the table maps of the OCV vs. SOC characteristics described above. In a case where the SOC(OCV) calculated from the table maps of the OCV vs. SOC characteristics deviates with respect to the SOC(UH) determined based on the peak of the differential characteristic of the UH vs. SOC present characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator210corrects the table maps of the OCV vs. SOC characteristics based on the amount of deviation. The details of this correction (SOC Estimation Correction 2) will be described later. Note that as the heat flow HF of the battery cells111, a heat flow detected by the battery heat flow detector120may be used as it is. Alternatively, as the heat flow HF of the battery cells111, a heat flow calculated by subtracting a reference heat flow detected by the reference heat flow detector130from a heat flow detected by the battery heat flow detector120may be used. This makes it possible to determine the heat flow of the battery cells111excluding effects of various heat flows in the battery unit100, that is, excluding the effects of noise. A heat flow of the battery cells111on the positive electrode side and a heat flow of the battery cells111on the negative electrode side may be averaged to be defined as the heat flow HF of the battery cells111. Here, to estimate a SOC, the OCV vs. 
SOC characteristics of the battery cells are stored in advance in the form of a plurality of table maps associated with respective temperatures, and when charge and discharge are not being performed, for example, when the vehicle is at standstill during actual use, one of the table maps corresponding to a detected temperature is referred to, whereby a SOC corresponding to a detected OCV is estimated as the SOC of the battery cells. The above configuration makes it possible to accurately estimate a SOC on the basis of a voltage of battery cells in the case where the battery cells are of a type in which the voltage is inclined with respect to a change in capacity, such as a lithium-ion battery including hard carbon as a material for the negative electrode. Meanwhile, battery cells have been recently used in which a change in voltage is small relative to a change in capacity, such as a lithium-ion battery including graphite as a material for the negative electrode. In the case of a battery unit including battery cells of this type, estimating a SOC based on the voltage of the battery cells results in low estimation accuracy. Since the OCV vs. SOC characteristics gradually change due to degradation of battery cells, the OCV vs. SOC characteristics gradually deviate from the initial characteristics. In this respect, the present inventors have devised a method of correcting an error in SOC estimation for a lithium-ion battery including graphite, by way of measurement of the CCV vs. SOC present characteristics at the time of, for example, charge that is performed while a vehicle is at standstill during actual use. By performing charge with a constant current and at a low rate, the capacity (mAh) can be calculated based on the charge current (mA) and the charge time (h). FIG. 6 is a graph corresponding to a Comparative Example and shows, as an example, the CCV vs. SOC characteristics and differential characteristics thereof, i.e., differential characteristics d(CCV)/d(SOC) of CCV characteristics CCV = f(SOC) with respect to the SOC. As illustrated in FIG. 6, for example, in the case of a lithium-ion battery including graphite as a material for the negative electrode and NCM as a material for the positive electrode, the CCV vs. SOC differential characteristics have a plurality of peaks due to a phase transition or the like of the graphite. However, only two peaks are clear among the plurality of peaks. It should be noted that a lithium-ion battery including hard carbon does not have such a feature in which CCV vs. SOC characteristics have peaks due to a phase transition or the like. Accordingly, for the correction of SOC estimation of the Comparative Example, the table maps of the OCV vs. SOC characteristics are corrected in accordance with the amount of deviation of a SOC estimated from the table maps of the OCV vs. SOC characteristics with respect to a SOC at a peak of the CCV vs. SOC differential characteristics. Specifically, at the time of a start of charge that is performed, for example, when the vehicle is at standstill during actual use, a SOC that corresponds to a measured OCV is estimated as a start SOC, with reference to the table maps of the OCV vs. SOC characteristics. At the time of charge that is performed, for example, when the vehicle is at standstill during actual use, CCV vs. SOC present characteristics are measured, and a peak of differential characteristics of the measured CCV vs. SOC present characteristics is detected.
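The differentiation and peak detection described here can be sketched in a few lines of Python. The sketch below numerically differentiates measured characteristics with respect to SOC and locates plus and minus peaks; the use of numpy.gradient and scipy.signal.find_peaks, the prominence threshold, and the synthetic data are illustrative assumptions rather than the specified method of this disclosure, and the same routine applies equally to the HF and UH characteristics discussed later.

```python
import numpy as np
from scipy.signal import find_peaks

def differential_characteristics(soc_pct, y):
    """d(y)/d(SOC) of measured characteristics y = f(SOC)."""
    return np.gradient(np.asarray(y, dtype=float), np.asarray(soc_pct, dtype=float))

def detect_peaks(soc_pct, y, prominence=0.01):
    """Return the SOC positions of plus and minus peaks of d(y)/d(SOC)."""
    dyds = differential_characteristics(soc_pct, y)
    plus_idx, _ = find_peaks(dyds, prominence=prominence)    # plus peaks
    minus_idx, _ = find_peaks(-dyds, prominence=prominence)  # minus peaks
    soc = np.asarray(soc_pct)
    return soc[plus_idx], soc[minus_idx]

# Example with synthetic CCV vs. SOC data (for illustration only):
soc = np.linspace(0.0, 100.0, 201)
ccv = 3.5 + 0.004 * soc + 0.02 * np.sin(soc / 6.0)
plus_peaks, minus_peaks = detect_peaks(soc, ccv)
```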
For example, with reference to a SOC at a peak of differential characteristics of previously stored CCV vs. SOC initial characteristics, the SOC at the detected peak is determined as a SOC(CCV). At this time, the charge capacity from the start of charge to the time of detection of the peak is calculated based on the charge current and the charge time, and the SOC(OCV) is calculated based on the calculated charge capacity and the start SOC estimated from the table maps of the OCV vs. SOC characteristics at the start of charge as described above. In a case where the SOC(OCV) calculated from the table maps of the OCV vs. SOC characteristics deviates with respect to the SOC(CCV) determined based on the peak of the CCV vs. SOC differential characteristics by an amount of deviation equal to or greater than a predetermined value, the table maps of the OCV vs. SOC characteristics are corrected based on the amount of deviation. However, the peaks of the CCV vs. SOC differential characteristics have relatively small magnitudes, relatively obtuse spectra, and relatively small S/N ratios. In particular, when the battery cells are degraded, this disadvantage becomes noticeable. For this reason, it is expected that only a relatively low estimation accuracy can be achieved even if the estimation of the start SOC is corrected based on the CCV vs. SOC differential characteristics. The present inventors have found that a SOC of battery cells correlates also with a heat flow HF of the battery cells caused by, for example, phase transition of an active material of an electrode material. The present inventors have further found that, in comparison with the CCV vs. SOC differential characteristics, the HF vs. SOC differential characteristics have features in which:
- peaks have larger magnitudes, sharper spectra, and larger S/N ratios, and maintain these characteristics even when the battery cells are degraded;
- the number of the peaks is greater and the intervals between the peaks are shorter; and
- plus peaks and minus peaks have a respective specific pattern, and even when the battery cells are degraded, the specific pattern is maintained; in other words, the peaks are unlikely to deviate in position relative to the SOC even when the battery cells are degraded.
Accordingly, the present inventors have devised a method of correcting SOC estimation for battery cells, based on a heat flow of the battery cells, specifically, HF vs. SOC characteristics, and more specifically, HF vs. SOC differential characteristics (SOC Estimation Correction 1 to be described later). The present inventors have further found that, in comparison with the CCV vs. SOC differential characteristics, the UH vs. SOC differential characteristics also have features in which:
- peaks have larger magnitudes, sharper spectra, and larger S/N ratios, and maintain these characteristics even when the battery cells are degraded;
- the number of the peaks is greater and the intervals between the peaks are shorter; and
- plus peaks and minus peaks have a respective specific pattern, and even when the battery cells are degraded, the specific pattern is maintained; in other words, the peaks are unlikely to deviate in position relative to the SOC even when the battery cells are degraded.
Accordingly, the present inventors have devised a method of correcting SOC estimation for battery cells, based on an enthalpy potential of the battery cells, specifically, UH vs. SOC characteristics, and more specifically, UH vs.
SOC differential characteristics (SOC Estimation Correction 2 to be described later). (SOC Estimation Correction 1) First, described is an example of correction of SOC estimation for the battery cells 111 that the battery state estimator 210 performs based on a heat flow of the battery cells 111, specifically, the HF vs. SOC characteristics, and more specifically, the HF vs. SOC differential characteristics. When charge and discharge are not being performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 periodically estimates a SOC of the battery cells 111 that corresponds to a detected OCV of the battery cells 111, by referring to the table maps of the OCV vs. SOC characteristics. At the time of a start of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 estimates, as a start SOC, a SOC of the battery cells 111 that corresponds to a detected OCV of the battery cells 111, by referring to the table maps of the OCV vs. SOC characteristics. Thereafter, during the charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 measures HF vs. SOC present characteristics of the battery cells 111 as illustrated in FIGS. 3B and 3C, and detects peaks of differential characteristics of the measured HF vs. SOC present characteristics. FIG. 4 is a graph corresponding to the present embodiment, and shows, as an example, the HF vs. SOC characteristics and the differential characteristics thereof, i.e., differential characteristics d(HF)/d(SOC) of HF characteristics HF = f(SOC) with respect to the SOC. As shown in FIG. 4, in the case of, for example, a lithium-ion battery including graphite as a material for the negative electrode and NCM as a material for the positive electrode, the HF vs. SOC differential characteristics have a plurality of peaks 1 to 7 caused by, for example, phase transition of the graphite. The positions (states of charge) of these peaks are unlikely to deviate even when the battery cells are degraded. The peaks 7, 6, 5, 4, 3, 2, and 1 are increasingly unlikely to deviate, in this order. Thus, the battery state estimator 210 determines, as a SOC(HF), the SOC at the detected peak, by referring to the SOC at a peak of the differential characteristics of the HF vs. SOC initial characteristics stored in the storage 220. At this time, the battery state estimator 210 calculates, based on the charge current and the charge time, the charge capacity from the start of charge to the time of detection of the peak, and calculates a SOC(OCV) based on the calculated charge capacity and the start SOC estimated from the table maps of the OCV vs. SOC characteristics at the start of charge as described above. For example, according to the following formulas, the battery state estimator 210 calculates a ΔSOC corresponding to the calculated charge capacity Q(t), and calculates the SOC(OCV) from the calculated ΔSOC and the start SOC estimated from the table maps of the OCV vs. SOC characteristics at the time of start of charge.

SOC(OCV) = startSOC + ΔSOC;
ΔSOC = Q(t)/C0(t); and
C0(t) = C0 × SOH,

wherein C0(t) is a present overall capacity, C0 is an initial overall capacity, SOH is a state of health, and t is an elapsed time. Here, if the SOH is accurately estimated, the ΔSOC is also considered to be accurately estimated, and the SOC(OCV) reflects a deviation of the start SOC estimated from the table maps of the OCV vs. SOC characteristics.
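To make the bookkeeping above concrete, here is a small Python sketch of the SOC(OCV) calculation from the start SOC, the coulomb-counted charge capacity Q(t), and the present overall capacity C0(t) = C0 × SOH. The function names and the sample numbers are illustrative assumptions.

```python
def charge_capacity_mah(charge_current_ma: float, charge_time_h: float) -> float:
    """Q(t): capacity accumulated by constant-current charging at a low rate."""
    return charge_current_ma * charge_time_h

def soc_ocv_pct(start_soc_pct: float, q_mah: float,
                initial_capacity_mah: float, soh: float) -> float:
    """SOC(OCV) = startSOC + ΔSOC, with ΔSOC = Q(t)/C0(t) and C0(t) = C0 × SOH."""
    present_capacity_mah = initial_capacity_mah * soh  # C0(t)
    delta_soc_pct = 100.0 * q_mah / present_capacity_mah
    return start_soc_pct + delta_soc_pct

# Example: start SOC 30 %, 50 mA constant-current charge for 0.4 h,
# initial capacity 100 mAh, SOH 0.8 -> ΔSOC = 100 * 20/80 = 25 %, SOC(OCV) = 55 %.
q = charge_capacity_mah(50.0, 0.4)       # 20 mAh
print(soc_ocv_pct(30.0, q, 100.0, 0.8))  # 55.0
```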
The SOH can be determined by the following method, for example. For example, the battery state estimator 210 estimates a state of health SOH of the battery cells based on the measured HF vs. SOC present characteristics shown in FIG. 3B or 3C and the previously stored HF vs. SOC initial characteristics shown in FIG. 3A. FIG. 3D shows, in a superimposed manner, the HF vs. SOC initial characteristics shown in FIG. 3A, the HF vs. SOC characteristics in the low degradation state shown in FIG. 3B, and the HF vs. SOC characteristics in the intermediate degradation state shown in FIG. 3C. Here, the lengths of the line segments between the peaks of the HF vs. SOC differential characteristics shown in FIG. 4 correlate with the overall capacity of the battery cells in the case of a lithium-ion battery of the negative electrode cut type (i.e., a battery whose overall capacity is limited by the negative electrode). Therefore, if the lengths (mAh) of the line segments between these peaks are known, the overall capacity (mAh) of the battery cells can be calculated. For example, it is assumed that the initial capacity for SOC 0%-100% is 100 mAh, and the length of the line segment between two arbitrary peaks among the plurality of peaks is 20 mAh. The length (mAh) of the line segment between the peaks can be suitably calculated from the charge current and the charge time. When the degradation of the battery cells progresses and the length of the line segment between the two peaks decreases to 10 mAh, the overall capacity of the battery cells becomes equal to 50 mAh. Therefore, the battery state estimator 210 stores in advance the lengths (mAh) of the line segments between the peaks of the differential characteristics of the HF vs. SOC initial characteristics. At the time of charge, the battery state estimator 210 measures the length (mAh) of the line segment between two arbitrary peaks of the differential characteristics of the HF vs. SOC present (degraded state) characteristics, and estimates a SOH according to the following formula.

SOH = {length (mAh) of the line segment between two peaks of the differential characteristics of the HF vs. SOC present (degraded state) characteristics} / {length (mAh) of the line segment between two peaks of the differential characteristics of the HF vs. SOC initial characteristics}

Since charge is carried out with a constant current and at a low rate during actual use, the length (mAh) of the line segment between the peaks can be calculated from the charge current (mA) and the charge time (h). This estimation does not require charge to be carried out from a SOC of 0% or up to a SOC of 100%, and therefore makes it possible to estimate a SOH at the time of charge during actual use. As described above, in the case where the SOC(OCV) calculated from the table maps of the OCV vs. SOC characteristics deviates with respect to the SOC(HF) determined based on the peak of the HF vs. SOC differential characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator 210 corrects the table maps of the OCV vs. SOC characteristics based on the amount of deviation. The correction can be suitably made by, for example, shifting the entire contents of the table maps uniformly (e.g., by the amount of deviation (SOC %)) in a lateral direction. As described above, according to the SOC estimation correction of the present embodiment, SOC estimation for battery cells is corrected based on the HF vs. SOC differential characteristics, instead of the CCV vs. SOC differential characteristics. As described above, in comparison with the CCV vs. SOC differential characteristics, the HF vs.
SOC differential characteristics have the features in which:
- peaks have larger magnitudes, sharper spectra, and larger S/N ratios, and maintain these characteristics even when the battery cells are degraded;
- the number of the peaks is greater and the intervals between the peaks are shorter; and
- plus peaks and minus peaks have a respective specific pattern, and even when the battery cells are degraded, the specific pattern is maintained; in other words, the peaks are unlikely to deviate in position relative to the SOC even when the battery cells are degraded.
As a result, the SOC estimation for the battery cells can be corrected with improved accuracy, and the SOC can be estimated with improved accuracy. In particular, the HF vs. SOC differential characteristics, in which a large number of peaks are present with short intervals interposed therebetween, allow correction of the SOC estimation even at the time of charge from various states of charge during actual use and even at the time of short-time charge during actual use. Further, the HF vs. SOC differential characteristics, in which the plus peaks and minus peaks have a respective specific pattern, allow the position (SOC) of the peak of the HF vs. SOC differential characteristics to be easily identified based on whether the peak is plus or minus and the pattern of magnitude of the peak even at the time of charge from various states of charge during actual use, thereby facilitating correction of the SOC estimation. Here, for actual use, a table map of an end-of-life (EOL) state is prepared as the table map of the OCV vs. SOC characteristics in some cases. Alternatively, for actual use, a plurality of table maps are prepared according to degradation states, such as a beginning-of-life (BOL) state, an end-of-life (EOL) state, and an intermediate degradation state therebetween, as the table maps of the OCV vs. SOC characteristics in some cases. In this respect, according to the correction of the SOC estimation of the present embodiment, since the table maps of the OCV vs. SOC characteristics can be corrected according to degradation, for example, only the table map of the initial state needs to be prepared. With respect to the estimation of the SOC and SOH, it is generally known to use a technique according to which a mathematical model of a battery cell is constructed and the SOC and SOH are estimated using a state estimator created from the model. This estimation involves a fundamental problem in that, when the battery model contains an error, the estimation accuracy decreases. Further, it cannot be determined whether or not the estimated value is correct while a battery is being used. In this regard, the embodiment of the present disclosure, according to which the HF characteristics are measured, makes it possible to correct an error contained in a mathematical model of a battery while the battery is being used. (SOC Estimation Correction 2) Next, described is an example of correction of SOC estimation for the battery cells 111 that the battery state estimator 210 performs based on an enthalpy potential of the battery cells 111, specifically, the UH vs. SOC characteristics, and more specifically, the UH vs. SOC differential characteristics. When charge and discharge are not being performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 periodically estimates a SOC of the battery cells 111 that corresponds to a detected OCV of the battery cells 111, by referring to the table maps of the OCV vs.
SOC characteristics. At the time of a start of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 estimates, as a start SOC, a SOC of the battery cells 111 that corresponds to a detected OCV of the battery cells 111, by referring to the table maps of the OCV vs. SOC characteristics. Thereafter, during the charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 measures UH vs. SOC present characteristics of the battery cells 111 as illustrated in FIGS. 3B and 3C, and detects peaks of differential characteristics of the measured UH vs. SOC present characteristics. Here, FIG. 5 is a graph corresponding to the present embodiment, and shows, as an example, the UH vs. SOC characteristics and the differential characteristics thereof, i.e., differential characteristics d(UH)/d(SOC) of UH characteristics UH = f(SOC) with respect to the SOC. As shown in FIG. 5, in the case of, for example, a lithium-ion battery including graphite as a material for the negative electrode and NCM as a material for the positive electrode, the UH vs. SOC differential characteristics have a plurality of peaks 1 to 7 caused by, for example, phase transition of the graphite. The positions (states of charge) of these peaks are unlikely to deviate even when the battery cells are degraded. The peaks 7, 6, 5, 4, 3, 2, and 1 are increasingly unlikely to deviate, in this order. Thus, the battery state estimator 210 determines, as a SOC(UH), the SOC at the detected peak, by referring to the SOC at a peak of the differential characteristics of the UH vs. SOC initial characteristics stored in the storage 220. At this time, the battery state estimator 210 calculates, based on the charge current and the charge time, the charge capacity from the start of charge to the time of detection of the peak, and calculates a SOC(OCV) based on the calculated charge capacity and the start SOC estimated from the table maps of the OCV vs. SOC characteristics at the start of charge as described above. For example, according to the following formulas, the battery state estimator 210 calculates a ΔSOC corresponding to the calculated charge capacity Q(t), and calculates the SOC(OCV) from the calculated ΔSOC and the start SOC estimated from the table maps of the OCV vs. SOC characteristics at the time of start of charge.

SOC(OCV) = startSOC + ΔSOC;
ΔSOC = Q(t)/C0(t); and
C0(t) = C0 × SOH,

wherein C0(t) is a present overall capacity, C0 is an initial overall capacity, SOH is a state of health, and t is an elapsed time. Here, if the SOH is accurately estimated, the ΔSOC is also considered to be accurately estimated, and the SOC(OCV) reflects a deviation of the start SOC estimated from the table maps of the OCV vs. SOC characteristics. The SOH can be determined by the following method, for example. For example, the battery state estimator 210 estimates a state of health SOH of the battery cells based on the measured UH vs. SOC present characteristics shown in FIG. 3B or 3C and the previously stored UH vs. SOC initial characteristics shown in FIG. 3A. Here, the lengths of the line segments between the peaks of the UH vs. SOC differential characteristics shown in FIG. 5 correlate with the overall capacity of the battery cells in the case of a lithium-ion battery of the negative electrode cut type. Therefore, if the lengths (mAh) of the line segments between these peaks are known, the overall capacity (mAh) of the battery cells can be calculated.
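Since the peak-to-peak segment lengths play the same role for the HF-based and UH-based corrections, the SOH estimation just described can be sketched as follows in Python; the prose example in the next paragraph walks through the same 100/20/10 mAh numbers. The function names and structure are illustrative assumptions.

```python
def soh_from_peak_segments(present_segment_mah: float,
                           initial_segment_mah: float) -> float:
    """SOH = (present peak-to-peak segment length) / (initial segment length)."""
    return present_segment_mah / initial_segment_mah

def present_overall_capacity_mah(initial_capacity_mah: float, soh: float) -> float:
    """C0(t) = C0 × SOH."""
    return initial_capacity_mah * soh

# Initial capacity 100 mAh for SOC 0-100 %; the segment between two peaks is
# initially 20 mAh and shrinks to 10 mAh with degradation.
soh = soh_from_peak_segments(10.0, 20.0)         # 0.5
print(present_overall_capacity_mah(100.0, soh))  # 50.0 mAh
```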
For example, it is assumed that the initial capacity for SOC 0%-100% is 100 mAh, and the length of the line segment between two arbitrary peaks among the plurality of peaks is 20 mAh. The length (mAh) of the line segment between the peaks can be suitably calculated from the charge current and the charge time. When the degradation of the battery cells progresses and the length of the line segment between the two peaks decreases to 10 mAh, the overall capacity of the battery cells becomes equal to 50 mAh. Therefore, the battery state estimator 210 stores in advance the lengths (mAh) of the line segments between the peaks of the differential characteristics of the UH vs. SOC initial characteristics. At the time of charge, the battery state estimator 210 measures the length (mAh) of the line segment between two arbitrary peaks of the differential characteristics of the UH vs. SOC present (degraded state) characteristics, and estimates a SOH according to the following formula.

SOH = {length (mAh) of the line segment between two peaks of the differential characteristics of the UH vs. SOC present (degraded state) characteristics} / {length (mAh) of the line segment between two peaks of the differential characteristics of the UH vs. SOC initial characteristics}

Since charge is carried out with a constant current and at a low rate during actual use, the length (mAh) of the line segment between the peaks can be calculated from the charge current (mA) and the charge time (h). This estimation does not require charge to be carried out from a SOC of 0% or up to a SOC of 100%, and therefore makes it possible to estimate a SOH at the time of charge during actual use. As described above, in the case where the SOC(OCV) calculated from the table maps of the OCV vs. SOC characteristics deviates with respect to the SOC(UH) determined based on the peak of the UH vs. SOC differential characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator 210 corrects the table maps of the OCV vs. SOC characteristics based on the amount of deviation. The correction can be suitably made by, for example, shifting the entire contents of the table maps uniformly (e.g., by the amount of deviation (SOC %)) in a lateral direction. As described above, according to the SOC estimation correction of the present embodiment, SOC estimation for battery cells is corrected based on the UH vs. SOC differential characteristics, instead of the CCV vs. SOC differential characteristics. As described above, in comparison with the CCV vs. SOC differential characteristics, the UH vs. SOC differential characteristics also have the features in which:
- peaks have larger magnitudes, sharper spectra, and larger S/N ratios, and maintain these characteristics even when the battery cells are degraded;
- the number of the peaks is greater and the intervals between the peaks are shorter; and
- plus peaks and minus peaks have a respective specific pattern, and even when the battery cells are degraded, the specific pattern is maintained; in other words, the peaks are unlikely to deviate in position relative to the SOC even when the battery cells are degraded.
As a result, the SOC estimation for the battery cells can be corrected with improved accuracy, and the SOC can be estimated with improved accuracy. In particular, the UH vs.
SOC differential characteristics, in which a large number of peaks are present with short intervals interposed therebetween, allow correction of the SOC estimation even at the time of charge from various states of charge during actual use and even at the time of short-time charge during actual use. Further, the UH vs. SOC differential characteristics, in which the plus peaks and minus peaks have a respective specific pattern, allow the position (SOC) of the peak of the UH vs. SOC differential characteristics to be easily identified based on whether the peak is plus or minus and the pattern of magnitude of the peak even at the time of charge from various states of charge during actual use, thereby facilitating correction of the SOC estimation. As described above, for actual use, a table map of an end-of-life (EOL) state is prepared as the table map of the OCV vs. SOC characteristics in some cases. Alternatively, for actual use, a plurality of table maps are prepared according to degradation states, such as a beginning-of-life (BOL) state, an end-of-life (EOL) state, and an intermediate degradation state therebetween, as the table maps of the OCV vs. SOC characteristics in some cases. In this respect, according to the correction of the SOC estimation of the present embodiment, since the table maps of the OCV vs. SOC characteristics can be corrected according to degradation, for example, only the table map of the initial state needs to be prepared. While embodiments of the present disclosure have been described above, the present disclosure is not limited to the embodiments described above, and various modifications and changes may be made to the present disclosure. For example, in the case of SOC Estimation Correction 1 described above, the table maps of the CCV vs. SOC characteristics may be corrected based on a heat flow of the battery cells, specifically the HF vs. SOC characteristics, and more specifically the HF vs. SOC differential characteristics. More specifically, at the time of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 measures HF vs. SOC present characteristics of the battery cells 111 as illustrated in FIGS. 3B and 3C, and detects peaks of differential characteristics of the measured HF vs. SOC present characteristics, in the same manner as described above. The battery state estimator 210 determines, as a SOC(HF), the SOC at the detected peak, by referring to the SOC at a peak of the differential characteristics of the HF vs. SOC initial characteristics stored in the storage 220. The battery state estimator 210 determines, as a SOC(CCV), a SOC corresponding to the CCV detected at the time of detection of the peak described above, by referring to one of the stored table maps of the CCV vs. SOC characteristics that corresponds to a temperature and a current detected at the time of detection of the peak described above. As described above, in the case where the SOC(CCV) calculated from the table maps of the CCV vs. SOC characteristics deviates with respect to the above-described SOC(HF) determined based on the peak of the HF vs. SOC differential characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator 210 corrects the table maps of the CCV vs. SOC characteristics based on the amount of deviation. The correction can be suitably made by, for example, shifting the entire contents of the table maps uniformly (e.g., by the amount of deviation (SOC %)) in a lateral direction.
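The lateral-shift correction that closes each of these procedures can be sketched as follows: when the table-map-derived SOC deviates from the peak-derived SOC by at least a predetermined value, every SOC entry of the table map is shifted uniformly by the deviation. The representation of a table map as a SOC grid array, the threshold value, and the sign convention of the shift are illustrative assumptions.

```python
import numpy as np

def correct_table_map(soc_grid_pct: np.ndarray,
                      soc_from_map_pct: float,
                      soc_from_peak_pct: float,
                      threshold_pct: float = 1.0) -> np.ndarray:
    """Shift the SOC axis of an OCV (or CCV) vs. SOC table map laterally by the
    amount of deviation, if the deviation is equal to or greater than the
    predetermined value."""
    deviation = soc_from_map_pct - soc_from_peak_pct
    if abs(deviation) < threshold_pct:
        return soc_grid_pct            # within tolerance: no correction
    return soc_grid_pct - deviation   # uniform lateral shift (SOC %)

# Example: the table map implies 55 % where the HF/UH peak implies 52 %,
# so the entire SOC axis is shifted by -3 SOC %.
soc_grid = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
print(correct_table_map(soc_grid, 55.0, 52.0))
```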
For actual use, a table map of an end-of-life (EOL) state is prepared as the table map of the CCV vs. SOC characteristics in some cases. Alternatively, for actual use, a plurality of table maps are prepared according to degradation states, such as a beginning-of-life (BOL) state, an end-of-life (EOL) state, and an intermediate degradation state therebetween, as the table maps of the CCV vs. SOC characteristics in some cases. In this respect, according to the correction of the SOC estimation of the present modification, since the table maps of the CCV vs. SOC characteristics can be corrected according to degradation, for example, only the table map of the initial state needs to be prepared. Moreover, in the case of SOC Estimation Correction 2 described above, the table maps of the CCV vs. SOC characteristics may be corrected based on an enthalpy potential of the battery cells, specifically the UH vs. SOC characteristics, and more specifically the UH vs. SOC differential characteristics. More specifically, at the time of charge that is performed, for example, when the vehicle is at standstill during actual use, the battery state estimator 210 measures UH vs. SOC present characteristics of the battery cells 111 as illustrated in FIGS. 3B and 3C, and detects peaks of differential characteristics of the measured UH vs. SOC present characteristics, in the same manner as described above. The battery state estimator 210 determines, as a SOC(UH), the SOC at the detected peak, by referring to the SOC at a peak of the differential characteristics of the UH vs. SOC initial characteristics stored in the storage 220. The battery state estimator 210 determines, as a SOC(CCV), a SOC corresponding to the CCV detected at the time of detection of the peak described above, by referring to one of the stored table maps of the CCV vs. SOC characteristics that corresponds to a temperature and a current detected at the time of detection of the peak described above. As described above, in the case where the SOC(CCV) calculated from the table maps of the CCV vs. SOC characteristics deviates with respect to the above-described SOC(UH) determined based on the peak of the UH vs. SOC differential characteristics by an amount of deviation equal to or greater than a predetermined value, the battery state estimator 210 corrects the table maps of the CCV vs. SOC characteristics based on the amount of deviation. The correction can be suitably made by, for example, shifting the entire contents of the table maps uniformly (e.g., by the amount of deviation (SOC %)) in a lateral direction. As described above, for actual use, a table map of an end-of-life (EOL) state is prepared as the table map of the CCV vs. SOC characteristics in some cases. Alternatively, for actual use, a plurality of table maps are prepared according to degradation states, such as a beginning-of-life (BOL) state, an end-of-life (EOL) state, and an intermediate degradation state therebetween, as the table maps of the CCV vs. SOC characteristics in some cases. In this respect, according to the correction of the SOC estimation of the present modification, since the table maps of the CCV vs. SOC characteristics can be corrected according to degradation, for example, only the table map of the initial state needs to be prepared.
EXPLANATION OF REFERENCE NUMERALS
100: Battery unit
101: Case
102: Cover
103: Lower frame
104: Upper frame
105: Cooling plate
106: Air introduction mechanism
110: Battery module
111: Battery cell
112: Stack
113: End plate
114: Cell bus bar
119: Module bus bar
120: Battery heat flow detector
130: Reference heat flow detector
141: Voltage detector
142: Current detector
143: Temperature detector
200: Battery management system (BMS)
210: Battery state estimator
220: Storage
11860234
Like references and/or reference symbols in the various drawings indicate like elements. DETAILED DESCRIPTION While historically commercial battery testers were large and wheeled from car to car for battery testing, in the past several years battery testers (e.g., for vehicles such as cars and boats) have been produced that are smaller in scale. However, as the size of the battery tester has decreased, the incidence of loss has increased. Smaller battery testers may be more easily misplaced, dropped, and/or damaged when misplaced (e.g., left where a car and/or repair equipment may roll over the battery tester). A magnetic battery tester housing may increase user satisfaction by decreasing incidents of loss, increasing ease of use (e.g., since a user may couple the tester close by), and/or decreasing incidental damage (e.g., due to leaving a battery tester in a place where it may be damaged, such as the floor, near a car wheel, etc.). In various implementations, a battery tester may include a housing that is capable of receiving components (e.g., electronic and/or mechanical components) that are capable of testing the strength of a battery. The battery tester may include cables and/or clamps that contact the battery (e.g., leads of the battery) and operate with the other components to test the strength of a battery (e.g., of a vehicle). The housing may have one or more features that may facilitate operation of the battery tester, inhibit damage (e.g., by inhibiting dropping and/or slippage), and/or increase user satisfaction. FIG. 1A illustrates a top view of an implementation of a battery tester and FIG. 1B illustrates a side perspective view of an implementation of a battery tester. FIGS. 1A-B are illustrated with a partially transparent top. FIG. 1C illustrates an exploded view of a simplified battery tester (e.g., components to test the battery may be simplified and/or vary) and FIG. 1D illustrates a portion of the back part of the battery tester. The battery tester housing may be magnetic. Thus, the battery tester may be coupled (e.g., removably) to a magnetic surface. By allowing the battery tester to be coupled to the magnetic surface, as desired by a user, loss and/or damage to the battery tester or components thereof may be decreased and user satisfaction with the device may be increased. For example, the battery tester may be coupled to magnetic surfaces such as, but not limited to, vehicles or portions thereof, poles (e.g., in shops), carts, tool boxes, wearables (e.g., belts with magnetic surfaces, vests with magnetic surfaces, etc.), magnetic boards (e.g., white boards, peg boards, and/or chalk boards), other vehicle testing equipment, vehicle repair equipment, and/or any other appropriate magnetic surface. The magnetic battery tester may have a housing that is magnetic on one side and not magnetic on at least one other side (e.g., such that a predetermined side is visible and/or hidden to protect a predetermined side). As illustrated, the battery tester 100 may include a housing 110. The housing may include a body with a length 101, a height 102, and a width 103. The body may have a first side 111, a second opposing side 112, a third side 113 disposed between the first side and the second side, and a fourth side 114 disposed between the first side and the second side. The housing may include a body with at least a first part 120 (e.g., proximate a front of the battery tester) and a second part 130 (e.g., proximate the back of the battery tester). A third part 140 may be disposed between the first part 120 and the second part 130.
The third part may at least partially circumscribe the housing of the battery tester as illustrated. The third part may be decorative (e.g., a stripe at least partially circumscribing the housing) and/or functional. The third part 140 may be a gasket in some implementations. The third part may be integral with the first part 120 or the second part 130, or may be omitted in some implementations. The first part 120 and the second part 130 may couple together (e.g., using any appropriate coupling members such as adhesive and/or fasteners 107), and a cavity 150 may be disposed between the first part and the second part. The first part and/or the second part may include one or more chamfered corners. The first part and/or the second part may include one or more beveled edges. Chamfered corners and/or beveled edges may be decorative and/or functional. For example, the chamfered corners and beveled edges may be less prone to damage upon dropping (e.g., since sharp corners may not be bent upon dropping); may reduce the incidence of sharp corners of the housing causing damage to other people and/or items; etc. In some implementations, the housing may include features to facilitate grip and inhibit dropping during use. For example, the length of the housing may be greater proximate the first side and the second side of the housing than at least a portion of the housing between the first side and the second side. The smaller portion between the first side and the second side may act as a gripping area to allow a user to more easily grip and/or retain the housing during use (e.g., as opposed to a housing with a more uniform cross-section along the height between the first side and the second side). A first part 120 of the housing 110 of the battery tester 100 may include opening(s) and/or recess(es) 122 for presentation component(s) 123 and/or input component(s) 124. The presentation component may include a screen (e.g., a touch screen and/or an LCD screen), lighting components (e.g., LEDs), etc. An input device may include a keypad (e.g., arrow, enter, delete, alphanumeric, etc.), a touch screen, etc. The presentation component and the input components may be separate pieces, unibody (e.g., a touchscreen capable of presenting information and/or allowing input of information), and/or pieces coupled together. In some implementations, a keypad may include arrows to allow use while wearing gloves (e.g., since many gloves may inhibit touchscreen operation; since accurate touching on touch screens may be difficult while wearing work gloves; and/or since fingers may be less flexible when wearing gloves). The keypad may be disposed below the screen to allow a user to view the screen while activating the keypad. The first part of the housing may include a recess to receive a label 125 (e.g., including device name, brand, etc.), in some implementations. The second part 130 of the housing 110 of the battery tester may include an inner side 131 and an opposing outer side 132. The inner side 131 of the second part 130 may include one or more magnet receiving members, such as one or more magnet recesses 132 and/or magnet protrusions. A magnet receiving member may help retain a magnet in a closed housing (e.g., a lip of a magnet receiving member may inhibit a magnet from dislodging from a predetermined position) even when a bonding between the magnet and the second part, such as adhesive, fails.
Positioning the magnet in the cavity of the housing may extend the life of the magnetic battery tester since the magnet may still reside in the housing and be capable of coupling with magnetic surfaces even if the magnet becomes dislodged from an initial position. A magnet recess 132 may be configured to receive at least a portion of a magnet 134 (e.g., a neodymium magnet). In some implementations, the body may include magnet protrusion(s) (e.g., a raised annular ring to receive magnet(s); protrusions that at least partially define a region in which a magnet is to be disposed; etc.) that are configured to receive at least a portion of a magnet. The magnet recess(es) and/or magnet protrusion(s) may define a region in which a magnet is to be disposed. The magnet recess(es) and/or magnet protrusion(s) may have a similar shape as the magnet(s) to be disposed in the magnet recess(es) and/or magnet protrusion(s). The magnet recess(es) and/or magnet protrusion(s) may be larger in cross-sectional area (e.g., slightly larger, such as approximately 5% larger than a magnet size; and/or any appropriate size larger) than a cross-sectional area of a magnet, in some implementations. In some implementations, the housing may not include magnet receiving members. The magnets may be coupled to an inner surface of the second part of the housing, in some implementations. One or more magnets may be disposed in the second part of the housing to allow the housing to be magnetic and couple with magnetic surfaces (e.g., a surface to which a magnet is capable of coupling, such as a surface that includes another magnet with the appropriate polarity, steel, iron, iron alloys, nickel, nickel alloys, cobalt, cobalt alloys, etc.). Magnet(s) may include neodymium magnets and/or any other appropriate type of magnet. In some implementations, at least two magnets may be disposed in the second part 130 of the housing 110. A first magnet may be disposed more proximate a third side than a second magnet in some implementations. In some implementations, four magnets may be disposed in the second part 130 of the housing 110. As illustrated, a first magnet and a second magnet may be more proximate the first side of the housing than the third magnet and the fourth magnet. The first magnet and the third magnet may be disposed more proximate a third side of the housing than the second magnet and the fourth magnet. A magnet may be a single magnet or multiple magnets (e.g., stacked). In some implementations, the second part 130 of the housing 110 may include one magnet. For example, the magnet may have a shape similar to a plate. The plate magnet may be disposed approximately centrally between the third side and the fourth side of the second part of the housing. In various implementations, the magnets may be coupled to the battery tester housing and/or disposed in the battery tester housing. For example, the magnets may be disposed in the magnet recesses and retained by contact with other components of the housing (e.g., when the first and second parts are coupled together). As another example, the magnets may be coupled in any appropriate manner, such as bonding, gluing, affixing, protrusions (e.g., flexible arms) that extend to retain the magnet, cover plate(s), etc. The number, type, and/or size of the magnet(s) disposed in the housing may be based on the size and/or weight of the battery tester or portions thereof (e.g., housing, cables, etc.).
For example, for a battery tester housing less than approximately 10 inches in height and less than approximately 5 inches in length, four neodymium magnets may be utilized. As another example, the size, shape, and/or number of magnets may be selected such that the weight of the cables and clamps extending from the battery tester does not cause the magnetic battery tester to slide from a first position to a second position (e.g., more than approximately 5 inches away) and/or uncouple from a magnetic surface. In some implementations, the position(s) of the magnet(s) may be selected such that the magnets are capable of coupling with a magnetic surface contacting the outer surface of the second part of the housing while not interfering with operations of the battery tester (e.g., magnetic interference with communication components, sensors, etc.). The magnets may be less than approximately 20 mm and/or less than approximately 16 mm from a communication component (e.g., Bluetooth), in some implementations. The magnet(s) may be disposed less than approximately 3 mm or less than approximately 2.5 mm from the back outer surface. The magnet(s) may not interfere with operations of the programmable logic disposed in the cavity. In some implementations, a shield may not be utilized between the programmable logic and the magnets. In some implementations, at least a portion of the outer surface of the second part may include a material, texture, and/or feature to facilitate retention of the magnetic battery tester on a magnetic surface (e.g., work cart, vehicle, pole, I-beam, etc.). For example, at least a portion may be rubberized. The rubberized outer surface or portion thereof may aid retention of the magnetic battery tester at a position on a magnetized surface and/or inhibit damage to the magnetic surface to which the battery tester couples. As another example, at least a portion of the second part may have greater friction than the first part such that sliding of the second part is inhibited when coupled to a magnetic surface (e.g., the static friction achievable at at least a portion of the second part is greater than the static friction achievable at the first part). For example, the plastic may have a tackiness, in some implementations. Thus, the second part or portion thereof (e.g., the rubberized portion) and the magnets may retain the battery tester at approximately a first position (e.g., the position to which it was initially coupled) rather than sliding from a first position to a second position. The use of a frictionally retaining outer surface or portion thereof may allow the use of weaker magnets than if a smooth or slippery outer surface were utilized, which may increase user enjoyment of the device (e.g., since it may not be difficult to remove the magnetic battery tester from a magnetized surface if a frictionally retaining outer surface is used rather than a stronger magnet to retain the battery tester in a first position). The outer surface may include labels, instructions, and/or other information on a panel 138. In some implementations, the second part of the housing may include one or more offset posts 136 extending from the second part. Offset posts 136 may include a rubberized portion (e.g., to inhibit scratching and/or damage of magnetic surfaces to which the battery tester couples) and act as the rubberized portion of the outer surface of the second part to retain a coupled battery tester at approximately a first position.
In some implementations, the height of the offset posts 136 may increase the distance the magnetic forces need to travel to interact with a magnetic surface, and the strength and/or number of magnets may be increased accordingly. The cavity 150 of the battery tester housing may include one or more components of the battery tester to facilitate testing the strength of the battery. For example, components such as lighting, a programmable logic component 180 (e.g., a printed circuit board (PCB)), a battery (e.g., to operate the lights and the programmable logic component), communication component(s) (e.g., Bluetooth, wireless, etc.), and/or any other appropriate component may be disposed in the cavity of the housing 110. The battery tester may include lighting components 184 at least partially disposed in the cavity. The lighting components 184 may extend through orifices 127 in the first part of the housing and/or be visible through the first part of the housing. The lighting components may provide signals to a user, such as a signal related to a strength of the battery (e.g., low, needs charging, good strength). The battery tester may include clamps coupled via cables 160 to the housing. The housing 110 may include ports 170 through which the cables at least partially pass to couple with other components of the battery tester (e.g., components to facilitate battery strength testing). A first end of the cables may be coupled to one or more components in the housing and a second end of the cables may be coupled to clamps to allow battery testing. The clamps may couple with and/or contact a portion of the battery (e.g., leads of the battery) to allow other components of the battery tester to determine the strength of the battery. In some implementations, wireless clamps may be utilized. For example, the housing may be capable of coupling with a set of clamps via a communication interface to operate together to test a strength of a battery. In some implementations, the battery tester may include holsters, in which clamps (e.g., for coupling with a battery to be tested) may be disposed. The holsters may be disposed, in some implementations, on the second part of the housing and/or on sides of the housing. The holster(s) may be single piece, in some implementations. The holster(s) may be removable members. For example, an outer surface of the second part may include one or more holster recesses that are capable of receiving protrusion(s) disposed on an outer surface of one or more holsters to couple the holster and the second part. The holster cup(s) may be removable and/or replaceable in some implementations. Thus, as the holster cup(s) break (e.g., from dropping, repetitive stress due to insertion of the clamps, etc.), the holster cup(s) may be replaced. The battery tester may be disposable in a base. The base may have any appropriate size and/or shape. The base may or may not include arms (e.g., to ease gripping the base). The base may be couplable to various surfaces such as walls. In some implementations, the battery tester may include any appropriate communication interfaces to communicate with one or more other computing devices to determine a strength of a battery, present a strength of a battery, etc. Although a particular shape and configuration of the housing has been illustrated, other shapes and/or configurations may be utilized, as appropriate. The housing of the battery tester may be any appropriate shape and/or size. The housing may include any appropriate material and/or any appropriate opacity.
The hardness of at least a portion of the housing may be selected to resist wear and/or damage from accidental drops. The surface hardness of at least a portion of the housing may be at least approximately 65. The housing may include one or more chamfered and/or beveled corners. In some implementations, one or more edges may be beveled. As illustrated, the chamfer and/or bevel may be similar and/or complementary on the first and the second parts of the housing. In various implementations, the housing may be configured to facilitate holding the battery tester in a hand during use and/or transport. As illustrated, the first side and/or second side of the housing may be larger than an area between the first side and the second side. In some implementations, the outer surface of the second part may include a raised surface 139 between the third side and the fourth side. The raised surface may more easily fit into a curved hand while a user holds the device than an approximately planar outer surface of a second part of the housing. The magnets may be disposed in the portion of the cavity corresponding to the raised surface to allow weaker and/or fewer magnets to be used than if the magnets were disposed in the surfaces adjacent the raised surface. In some implementations, the raised surface may be proximate the first side and/or may not extend to the second side of the housing. In some implementations, the raised surface may be used instead of sticky feet, straps, and/or tacky housing material. The raised surface may facilitate gripping of the housing as well as or better than these options. Additionally, unlike tacky surfaces, feet, and/or straps, the raised surface may be more durable (e.g., since it does not need to be replaced like lost feet or broken straps, and/or washed to restore tackiness, etc.). FIGS. 1A-1C illustrate an implementation of an example battery tester system. As illustrated, the battery tester system may include a battery tester and a base in which the battery tester may reside (e.g., during use and/or when not in use). FIGS. 2A-2S illustrate an implementation of an example battery tester and portions thereof. As illustrated, a camera and/or other type of sensor may be disposed on the second opposing side of the battery tester. The camera and/or other type of sensor may be disposed in any appropriate position on the battery tester. The camera and/or other type of sensor may be disposed between the holster cups. In some implementations, the camera and/or other type of sensor may not be inhibited from obtaining images and/or other readings when the clamps are disposed in the holster. During use, since the clamps may be coupled to the battery and may not be disposed in the holster, the clamps and/or cabling may not inhibit image capture and/or sensor readings (e.g., since the cabling may exit the housing of the battery tester proximate a bottom of the tester and/or below the holster). The position of the camera and/or other type of sensor on the battery tester between the holsters may be selected such that, when a user is holding the battery tester, the camera and/or sensor may not be blocked by fingers and/or palms that hold the battery tester (e.g., since the user may hold the battery tester in the middle and/or below the holster). Thus, inadvertently erroneous readings may be inhibited via placement between the holsters. FIGS. 2A-2N illustrate an implementation of an example battery tester housing and portions thereof.
The housing may include two or more parts. In some implementations, the housing includes magnets to operate as a magnetic battery tester housing as described herein. FIGS. 3A-3P illustrate an implementation of an example battery tester housing and portions thereof. The housing may include two or more parts. In some implementations, the housing includes magnets to operate as a magnetic battery tester housing as described herein. FIGS. 4A-4G illustrate an implementation of an example battery tester housing and portions thereof. The housing may include two or more parts. In some implementations, the housing includes magnets to operate as a magnetic battery tester housing as described herein. FIGS. 5A-5O illustrate an implementation of an example battery tester housing and portions thereof. The housing may include two or more parts. In some implementations, the housing includes magnets to operate as a magnetic battery tester housing as described herein. As illustrated in FIG. 5H, the holsters for the clamps may be removable. The outer surface of the second part of the housing may include holster recesses and the outer surface of the holsters may include protrusions receivable by the holster recesses of the housing. The holsters may be coupled by sliding and/or snapping the holster onto the outer surface of the second part of the housing. In some implementations, the magnet(s) and/or magnet receiving member(s) may be disposed on an inner surface of the first side. For example, the magnet(s) may be coupled to an inner surface of the first side such that at least a portion of the outer surface of the first part of the housing contacts the magnetic surface when coupled to the magnetic surface. Coupling the first part may protect the "face" of the battery tester (e.g., presentation interface such as lighting components and/or screen; input device; etc.) when not in use. In some implementations, the magnet(s) and/or magnet receiving member(s) may be disposed on an outer surface of the first side and/or second side of the housing. Positioning the magnets on an outer surface may facilitate replacement of magnets and/or facilitate coupling of the battery tester on a magnetic surface since the magnetic region may be visible. Although the first part and the second part are described and illustrated as separate pieces, the first part and the second part may be joined (e.g., a clamshell arrangement). Although the first part and the second part are described and illustrated as unibody pieces, the first and/or second parts may include one or more segments that form the first part and/or second part. Described process(es) may be implemented by various systems, such as the systems described herein. In addition, various operations may be added, deleted, and/or modified. In some implementations, operations of the process(es) may be performed in combination with other described processes or portions thereof. As described herein, terms describing position, such as front, back, top, and bottom, are relative terminology used to distinguish one side from another side. A term describing position may or may not correspond to an orientation relative to a user during use. It is to be understood that the implementations are not limited to particular systems or processes described, which may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular implementations only, and is not intended to be limiting.
As used in this specification, the singular forms “a”, “an” and “the” include plural referents unless the content clearly indicates otherwise. Thus, for example, reference to “a holster” includes a combination of two or more holsters and reference to “a keypad” includes different types and/or combinations of keypads. Although the present disclosure has been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
25,778
11860235
DETAILED DESCRIPTION First Embodiment An embodiment of the battery state estimation apparatus will be described with reference to FIGS. 1 to 5. As shown in FIG. 1, a battery state estimation apparatus 1 according to the present embodiment includes an electrical information acquisition unit 11, a change amount calculation unit 12, a resistance value calculation unit 13, a temperature acquisition unit 14, a constant calculation unit 15, and a convergence determination unit 16. The electrical information acquisition unit 11 acquires a current value and a voltage value of a secondary battery 2 by measurement. The change amount calculation unit 12 calculates a current change amount ΔI and a voltage change amount ΔV of the secondary battery 2 in a predetermined period by using the current values and the voltage values acquired by the electrical information acquisition unit 11. The resistance value calculation unit 13 calculates a measured DC resistance value based on the current change amount ΔI and the voltage change amount ΔV. The temperature acquisition unit 14 acquires the temperature of the secondary battery 2. The constant calculation unit 15 calculates (i) a non-temperature dependent constant Ra and (ii) a constant of a temperature dependent function Rb ("A" described later), based on (i) the estimated DC resistance value Rdc′ represented by the sum of the non-temperature dependent constant Ra and the temperature dependent function Rb and (ii) the measured DC resistance value Rdc acquired by the resistance value calculation unit 13. The non-temperature dependent constant Ra is a temperature-independent component of the DC resistance of the secondary battery 2. The temperature dependent function Rb is a function that depends on the temperature in the DC resistance of the secondary battery 2. The convergence determination unit 16 includes a function of determining whether or not the values calculated by the constant calculation unit 15 have converged; the details will be described later. Hereinafter, the present embodiment will be described in detail. The battery state estimation apparatus 1 according to the present embodiment is used by being mounted on a vehicle such as an electric vehicle or a hybrid vehicle, together with the secondary battery 2. The battery state estimation apparatus 1 may be incorporated in, for example, an engine ECU (electronic control unit). It is noted that the engine ECU, the battery state estimation apparatus 1 in the engine ECU, and the units 11 to 16 in the battery state estimation apparatus 1 and methods thereof may be implemented by one or more special-purpose computers. Such computers may be created (i) by configuring (a) a memory and a processor programmed to execute one or more particular functions embodied in computer programs, or (ii) by configuring (b) a processor provided by one or more special-purpose hardware logic circuits, or (iii) by configuring a combination of (a) a memory and a processor programmed to execute one or more particular functions embodied in computer programs and (b) a processor provided by one or more special-purpose hardware logic circuits. The computer programs may be stored, as instructions executed by a computer, in a tangible non-transitory computer-readable storage medium. As shown in FIG. 1, the secondary battery 2 is connected to an inverter 3 and a charging device 4. The inverter 3 converts the DC power supplied from the secondary battery 2 into AC power and outputs the AC power to a three-phase AC motor (not shown).
The secondary battery 2 includes a plurality of battery cells 21 connected in series with each other. Each battery cell 21 consists of, for example, a lithium-ion secondary battery. The positive electrode of the secondary battery 2 is made of a lithium transition metal oxide such as LiFePO4. In addition, the negative electrode of the secondary battery 2 is made of a negative electrode active substance that can occlude and release lithium ions, such as graphite. The secondary battery 2 may be configured by connecting a plurality of battery cells 21 in parallel to each other to form a cell block, and connecting a plurality of these cell blocks in series to each other. A voltage sensor 5 and a current sensor 6 are connected to the secondary battery 2. Information from the voltage sensor 5 and information from the current sensor 6 are transmitted to the electrical information acquisition unit 11. In addition, a temperature sensor 7 for measuring the temperature of the secondary battery 2 is arranged in the vicinity of the secondary battery 2. Information from the temperature sensor 7 is transmitted to the temperature acquisition unit 14. As mentioned above, the secondary battery 2 is connected to the inverter 3 and the charging device 4. A discharge switch 81 is provided between the secondary battery 2 and the inverter 3. In addition, a charging switch 82 is provided between the secondary battery 2 and the charging device 4. When power is supplied from the secondary battery 2 to the inverter 3, the discharge switch 81 is turned on. Also, when the secondary battery 2 is charged, the charging switch 82 is turned on. The on-off operation of the charging switch 82 and the discharge switch 81 is controlled by the ECU. Next, the method of estimating the DC resistance value of the deteriorated secondary battery 2 will be explained. The DC resistance of the secondary battery 2 is theoretically expressed by the sum of the non-temperature dependent constant Ra and the temperature dependent function Rb. Specifically, the DC resistance of the secondary battery 2 is expressed by the following Expression (1):

R_{dc}'(t) = R_a + R_b(t) = R_a + A \exp(B / T(t)) \quad (1)

Here, the prime (′) symbol means that the value is estimated, not measured. The non-temperature dependent constant Ra in the above Expression (1) is a resistance of the secondary battery 2 that does not depend on temperature. The non-temperature dependent constant Ra includes non-temperature dependent resistances such as metal conductor resistance and bus bar contact resistance. The non-temperature dependent constant Ra increases with deterioration of the secondary battery 2 (increase in the elapsed years), for example, as schematically shown in FIG. 2. The temperature dependent function Rb in the above Expression (1) is a DC resistance component expressed according to the Arrhenius equation, and depends on the temperature of the secondary battery 2 as shown in FIG. 3. FIG. 3 schematically shows the transition of the temperature dependent function Rb at 0 degrees C., 10 degrees C., and 25 degrees C. Here, "A" in the temperature dependent function Rb is a constant in the temperature dependent function Rb, and its value increases as the secondary battery 2 deteriorates. Further, "B" in the temperature dependent function Rb is an eigenvalue determined by the materials that are included in the secondary battery 2. Also, "T" in the temperature dependent function Rb is the absolute temperature of the secondary battery 2, and it depends on the time t.
Here, as shown in FIG. 4, the measured value of the DC resistance of the secondary battery 2 can be expressed by the following Expression (2), using the change ΔI of the current value in a measured specific period and the change ΔV of the voltage value in the same period:

R_{dc}(t) = \Delta V(t) / \Delta I(t) \quad (2)

Here, ΔV(t) is expressed by ΔV(t) = V(t) − V(t−1). V(t−1) means the voltage value of the secondary battery 2 at the time t−1 when the voltage value was acquired immediately before V(t). Also, ΔI(t) is expressed by ΔI(t) = I(t) − I(t−1). I(t−1) means the current value of the secondary battery 2 at the time t−1 when the current value was acquired immediately before I(t). The resistance value calculation unit 13 calculates the measured DC resistance value Rdc(t) using the measured values of the current value and the voltage value of the secondary battery 2 acquired by the electrical information acquisition unit 11. That is, in calculating the measured DC resistance value Rdc(t), the measured current value and the measured voltage value are used as-is, instead of, for example, the current value and voltage value of the secondary battery 2 after an averaging process. Then the constant calculation unit 15 calculates (i) the non-temperature dependent constant Ra and (ii) the constant A of the temperature dependent function Rb, in the estimated DC resistance value Rdc′, such that the cumulative error between (i) the estimated DC resistance value Rdc′ expressed using the measured value of the temperature of the secondary battery 2 and (ii) the measured DC resistance value Rdc is less than or equal to a predetermined value. The method for calculating these constants Ra and A may use a sequential least squares method, a least squares method, a Kalman filter, etc. Next, the process of estimating the constants Ra and A by the battery state estimation apparatus 1 will be described with reference to the flowchart shown in FIG. 5. In step S1, the battery state estimation apparatus 1 sets i, corresponding to the time axis, to 0 (zero) and sets N, corresponding to the number of calculations of A and Ra, to 0 (zero). Then, in step S2, the current value I(0) and the voltage value V(0), which are the initial values of the current value I(i) and the voltage value V(i), are acquired by measurement by the current sensor 6 and the voltage sensor 5. The measurement by the current sensor 6 and the voltage sensor 5 is performed repeatedly at regular time intervals. Furthermore, A(0) and Ra(0), which are the initial values of A(N) and Ra(N), are acquired. Here, A(0) and Ra(0) may be the values of A and Ra that were updated during the previous run. Suppose a case where there are no values of A and Ra that were updated during the previous run (for example, when the system is started for the first time). Such a case employs, as A(0) and Ra(0), values of A and Ra that are calculated in advance using a secondary battery before deterioration of the same type as the secondary battery 2 used with the battery state estimation apparatus 1. Next, in step S3, N is increased by one (1). Also, in step S4, i is increased by one (1). In step S5, the current value I(1), voltage value V(1), and temperature T(1) are acquired. Here, the current value I(1), voltage value V(1), and temperature T(1) are acquired at the measurement timing (that is, the timing of i=1) following the measurement timing of the current value I(0) and the voltage value V(0) (that is, the timing of step S2). Then, in step S6, ΔI(i) and ΔV(i) are calculated.
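As a small illustration of Expression (2) (a sketch only; the sample values are invented for the example):

    def measured_dc_resistance(i_now, i_prev, v_now, v_prev):
        # steps S6 and S8: Rdc(t) = delta_V(t) / delta_I(t), Expression (2)
        delta_i = i_now - i_prev
        delta_v = v_now - v_prev
        return delta_v / delta_i

    # e.g., charging current steps from 10 A to 13 A while the cell voltage
    # rises from 3.600 V to 3.620 V: Rdc = 0.020 / 3.0, about 6.7 milliohms
    rdc = measured_dc_resistance(13.0, 10.0, 3.620, 3.600)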
As described above, ΔI(i) is the value obtained by subtracting the current value I(i−1) acquired at the previous time point from the current value I(i) acquired at the present time point. That is, ΔI(i) = I(i) − I(i−1). ΔV(i) is the value obtained by subtracting the voltage value V(i−1) acquired at the previous time point from the voltage value V(i) acquired at the present time point. That is, ΔV(i) = V(i) − V(i−1). Next, in step S7, it is determined whether or not the resistance value acquisition condition is satisfied. In the present embodiment, the resistance value acquisition condition means a condition in which both the following Expressions (3) and (4) are satisfied:

|\Delta I(t)| \geq j \quad (3)

|\Delta I(t)| / I(t) \geq l \quad (4)

In addition, j in the above Expression (3) and l in the above Expression (4) are predetermined conforming values. By satisfying both Expressions (3) and (4), the measured DC resistance value Rdc(i) can be obtained with high accuracy. When at least one of Expressions (3) and (4) is not satisfied, steps S4 to S7 are repeated. Then, when it is determined in step S7 that both Expressions (3) and (4) are satisfied, the process proceeds to the next step S8. Next, in step S8, the measured DC resistance value Rdc(i) = ΔV(i)/ΔI(i) is calculated from ΔI(i) and ΔV(i) calculated in step S6 (the above Expression (2)). Next, in step S9, the constants Ra(N) and A(N) are obtained by the sequential least squares method based on exp(B/T(i)), Ra(N−1), and A(N−1). Here, exp(B/T(i)) is expressed using the measured DC resistance value Rdc(i) obtained in step S8 and T(i) obtained in step S5. Ra(N−1) and A(N−1) are the previously estimated values of Ra and A, respectively. The sequential least squares method identifies the parameters as follows. First, the measured DC resistance value Rdc (see Expression (2)) and the estimated DC resistance value Rdc′ (see Expression (1)) are set as follows:

y(k) = R_{dc}(k) = z^T(k) \cdot \theta(k) \quad (5)

z(k) = [\exp(B / T(k)), \ 1]^T, \quad \theta(k) = [A(k), \ R_a(k)]^T \quad (6)

Here, the superscript "T" means the transpose. Further, as described above, B in Expression (6) is an eigenvalue determined by the materials included in the secondary battery 2. T is the absolute temperature of the secondary battery 2, obtained from the temperature sensor 7. Therefore, the constants to be identified in Expressions (5) and (6) are Ra(k) and A(k) in θ(k). The identification of Ra(k) and A(k) is performed by the following Expression (7), expressing the algorithm of the sequential least squares method. As the sequential least squares method, for example, a commonly used method can be adopted:

\theta'(k) = \theta'(k-1) + L(k) \, \epsilon(k)
\epsilon(k) = y(k) - \Phi^T(k) \, \theta'(k-1)
L(k) = \frac{P(k-1) \, \psi(k)}{\rho(k) + \Phi^T(k) \, P(k-1) \, \psi(k)}
P(k) = \frac{1}{\rho(k)} \left[ P(k-1) - \frac{P(k-1) \, \psi(k) \, \Phi^T(k) \, P(k-1)}{\rho(k) + \Phi^T(k) \, P(k-1) \, \psi(k)} \right]
\Phi(k) = z(k), \quad \psi(k) = z(k) \quad (7)

Here, P is a 2×2 covariance matrix, and L is a 2×1 matrix. ρ(k) is a forgetting coefficient, and ρ(k) satisfies 0 < ρ ≤ 1. In the present embodiment, ρ(k) is appropriately determined so as to weaken the influence of past data and strengthen the influence of data in the vicinity of the present. As a result, the constant calculation unit 15 calculates the current Ra and A by weighting the past constants Ra and A according to the period (i.e., the time difference) from the present time point (k). With such an algorithm, Ra(N) and A(N) are calculated using Rdc(i), exp(B/T(i)), Ra(N−1), and A(N−1).
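For illustration, a minimal NumPy sketch of one update of Expression (7) follows. The values of B, ρ, and the initial θ and P are placeholders chosen for the example, not values taken from this description:

    import numpy as np

    B = 3000.0    # assumed material eigenvalue "B", in kelvin
    RHO = 0.98    # forgetting coefficient rho, 0 < rho <= 1

    def rls_update(theta, P, rdc_measured, temp_kelvin, rho=RHO):
        # z(k) = [exp(B/T(k)), 1]^T and theta = [A, Ra], per Expressions (5), (6)
        z = np.array([np.exp(B / temp_kelvin), 1.0])
        eps = rdc_measured - z @ theta                 # innovation epsilon(k)
        gain = P @ z / (rho + z @ P @ z)               # gain L(k)
        theta = theta + gain * eps                     # parameter update
        P = (P - np.outer(gain, z) @ P) / rho          # covariance update
        return theta, P

    theta = np.array([1.0e-6, 5.0e-3])   # [A(0), Ra(0)]: placeholder initial values
    P = 1.0e3 * np.eye(2)                # large initial covariance
    theta, P = rls_update(theta, P, rdc_measured=8.0e-3, temp_kelvin=298.15)

The division by ρ in the covariance update is what discounts old samples, matching the description's weighting of past constants toward the present time point.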
Next, in step S10, a convergence determination is performed. The convergence determination determines that the respective values of Ra and A have converged when both the following first condition and second condition are satisfied. In contrast, when at least one of the first condition and the second condition is not satisfied, it is determined that the respective values of Ra and A have not converged. First condition: the number of calculations of Ra and A is equal to or greater than a specified number (for example, 3 times). Second condition: (i) the slope of the approximate straight line obtained by plotting Ra(k−n), . . . , Ra(k−1), Ra(k), which are calculated or obtained at the corresponding time points up to the present time point, on the Y axis against the time (i.e., the corresponding time points up to the present time point) on the X axis is equal to or less than a threshold value, and (ii) the slope of the approximate straight line obtained by plotting A(k−n), . . . , A(k−1), A(k), which are calculated or obtained at the corresponding time points up to the present time point, on the Y axis against the time (i.e., the corresponding time points up to the present time point) on the X axis is equal to or less than a threshold value. Here, n is a natural number less than or equal to k, and determines how much past data is considered in the convergence determination, going backward in time from the present time point (k). As a result, the constant calculation unit 15 calculates Ra(N) and A(N) based on the measured DC resistance values that are acquired by the resistance value calculation unit 13 during the measurement period, which is a predetermined period from the start time point (k−n) to the present time point (k). When at least one of the first condition and the second condition is not satisfied, the process returns to step S3 while retaining the logs of Ra(N) and A(N) acquired in step S9. In contrast, when both the first condition and the second condition are satisfied, it is determined that the values of Ra(N) and A(N) have converged, and Ra(N) and A(N) are determined. Then, the next estimation of Ra and A is realized by performing the process from step S3 while retaining the previous logs of Ra and A. Then, in step S11, the values of Ra(N) and A(N) are updated sequentially. From the above, the present embodiment makes it possible to separately estimate the DC resistance component that does not depend on temperature and the DC resistance component that depends on temperature from the measured values of the current, voltage, and temperature in actual usage patterns. It is thus possible to estimate the onboard deterioration of the DC resistance of the secondary battery 2 with high accuracy. The present embodiment provides the following functions and effects. In the battery state estimation apparatus 1 according to the present embodiment, the following is performed. The change amount calculation unit 12 calculates the current change amount ΔI and the voltage change amount ΔV of the secondary battery 2 in a predetermined period by using the current values and voltage values of the secondary battery 2 acquired by measurement by the electrical information acquisition unit 11. Then, the resistance value calculation unit 13 calculates the measured DC resistance value based on the current change amount ΔI and the voltage change amount ΔV.
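A sketch of the convergence test of step S10 described above, assuming NumPy and using the absolute slope of a least-squares line fit (taking the absolute value is an assumption; the description states only that the slope be at or below a threshold):

    import numpy as np

    def has_converged(log_values, slope_threshold, min_count=3):
        # first condition: at least min_count calculations of the constant
        if len(log_values) < min_count:
            return False
        # second condition: slope of the approximate straight line through
        # the logged values (Y axis) against time (X axis) is small enough
        t = np.arange(len(log_values))
        slope = np.polyfit(t, np.asarray(log_values), 1)[0]
        return abs(slope) <= slope_threshold

    # Ra(N) and A(N) are fixed only when both logs satisfy the test, e.g.:
    # converged = has_converged(ra_log, 1e-6) and has_converged(a_log, 1e-9)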
Further, the constant calculation unit 15 calculates the non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb based on the estimated DC resistance value and the measured DC resistance value in the predetermined period. The estimated DC resistance value is expressed by the sum of the non-temperature dependent constant Ra and the temperature dependent function Rb. In this way, the estimated DC resistance value is expressed using both the non-temperature dependent component Ra and the temperature dependent component Rb of the secondary battery 2. The non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb are calculated based on the estimated DC resistance value Rdc′, the measured DC resistance value Rdc, and the actually measured temperature T of the secondary battery 2. Thereby, the non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb can be calculated with high accuracy. This makes it possible to estimate the DC resistance value of the secondary battery 2 after deterioration with high accuracy. In addition, the constant calculation unit 15 calculates the non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb based on the measured DC resistance values that are calculated by the resistance value calculation unit 13 during a measurement period, which is a predetermined period ranging from the start time point to the present time point. As a result, the non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb are calculated using the measured DC resistance values acquired around the present time point. Here, a measured DC resistance value acquired long before the present time point may hinder high-accuracy estimation of the DC resistance value of the secondary battery 2 around the present time point. Therefore, by performing the above process, it is possible to estimate the DC resistance value of the secondary battery 2 with higher accuracy. In addition, the constant calculation unit 15 calculates the non-temperature dependent constant Ra and the constant A of the temperature dependent function Rb at the present time point by weighting the multiple past calculated non-temperature dependent constants Ra and the multiple past calculated constants A of the temperature dependent function Rb according to the time from the present time point. This also makes it possible to estimate the DC resistance value of the secondary battery 2 with higher accuracy. In addition, the absolute value of the current change amount ΔI acquired by the electrical information acquisition unit 11 is required to be equal to or greater than a predetermined value. Therefore, when the resistance value calculation unit 13 calculates the measured DC resistance value from ΔI and ΔV, it is easy to calculate the measured DC resistance value with high accuracy. On the other hand, if ΔI is too small, it is difficult to calculate the DC resistance value with high accuracy. In addition, the resistance value calculation unit 13 calculates the measured resistance value using the actually measured current values and actually measured voltage values of the secondary battery 2 acquired by the electrical information acquisition unit 11.
That is, when calculating the measured DC resistance value Rdc, the actually measured current values and the actually measured voltage values are used instead of corrected values such as the current value and voltage value of the secondary battery 2 after an averaging process. Therefore, the measured DC resistance value can be calculated with high accuracy. As described above, according to the present embodiment, it is possible to provide a battery state estimation apparatus capable of estimating the DC resistance value of the secondary battery after deterioration with high accuracy. Second Embodiment In a second embodiment, the basic configuration is the same as that in the first embodiment, but as shown in FIG. 6, the processing method for estimating the constants Ra and A by the battery state estimation apparatus 1 is partially changed. In the present embodiment, first, in step S21, i, corresponding to the time axis, and M, corresponding to the number of calculations of Rdc, are each set to 0 (zero). Then, in step S22, the current value I(0) and the voltage value V(0), which are the initial values of the current value I(i) and the voltage value V(i), are acquired by measurement by the current sensor 6 and the voltage sensor 5. Next, in step S23, i is incremented by 1 (one). In step S24, the current value I(1), the voltage value V(1), and the value of the temperature T(1) at the timing of i=1 are acquired by measurement. The timing of i=1 follows the measurement timing of the current value I(0) and the voltage value V(0) (that is, the timing of step S22). Then, in step S25, ΔI(i) and ΔV(i) are calculated in the same manner as in the first embodiment. Next, in step S26, it is determined whether or not the resistance value acquisition condition is satisfied, as in the first embodiment. Here, in the present embodiment, when it is determined that the resistance value acquisition condition is not satisfied, the process returns to step S23. Then, when it is determined in step S26 that the resistance value acquisition condition is satisfied, in step S27, the measured DC resistance value Rdc(i) = ΔV(i)/ΔI(i) is calculated from ΔI(i) and ΔV(i) calculated in step S25. Then, Rdc(M) = Rdc(i) and T(M) = T(i) are set, and the logs of Rdc and T are memorized. Next, in step S28, it is determined whether or not the number of logs of Rdc(M) is equal to or greater than a predetermined number. This predetermined number can be selected to the extent that the least squares method performed in the later step S30 can be performed with high accuracy. When the number of logs is less than the predetermined number in step S28, the logs of Rdc(M) and T(M) up to that point are retained, and M is increased by 1 (one) in step S29 to return to step S23. In contrast, when the number of logs is equal to or greater than the predetermined number in step S28, the process proceeds to step S30. In step S30, Ra(M) and A(M) are calculated using the least squares method from the log values of Rdc(M) up to the present time point and the values of exp(B/T(M)) obtained from the log values of T(M) up to the present time point. For the least squares method, it is possible to use a general method. Then, in step S31, the calculated values are determined respectively as Ra(M) and A(M), and the values of Ra(M) and A(M) are updated. Then, the next estimation of Ra and A is realized by resetting M to 0 in step S32 after step S31 and then performing the process from step S23.
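A compact sketch of the batch identification of step S30, assuming NumPy, temperatures logged in kelvin, and the same placeholder value of B as in the earlier sketch:

    import numpy as np

    B = 3000.0   # assumed material eigenvalue "B", in kelvin

    def batch_identify(rdc_log, temp_log):
        # regressor columns [exp(B/T(M)), 1] mirror Expression (6)
        x = np.exp(B / np.asarray(temp_log, dtype=float))
        Z = np.column_stack([x, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(Z, np.asarray(rdc_log, dtype=float), rcond=None)
        A_hat, Ra_hat = coeffs
        return A_hat, Ra_hat

    # example with synthetic logs: Rdc = A*exp(B/T) + Ra is recovered
    temps = [273.15, 283.15, 298.15]
    rdcs = [1e-6 * np.exp(B / T) + 5e-3 for T in temps]
    print(batch_identify(rdcs, temps))   # approximately (1e-6, 5e-3)

Unlike the sequential method of the first embodiment, this variant waits until enough logged samples exist and then solves for both constants in one least-squares step.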
Incidentally, among the reference numerals used in the second and subsequent embodiments, the same reference numerals as those used in an embodiment already described represent the same elements as those in the embodiment already described, unless otherwise indicated. Also in the present embodiment, effects similar to those in the first embodiment are obtained. The present disclosure is not limited to the respective embodiments described above, and various modifications may be adopted within the scope of the present disclosure without departing from the spirit of the disclosure. Although the present disclosure has been described in accordance with the embodiments, it should be understood that the present disclosure is not limited to such embodiments or the relevant structures. The present disclosure also includes various modifications and variations within the equivalent scope. In addition, various combinations or modes, and other combinations or modes including one additional element, two or more additional elements, or fewer elements, may also be included within the scope of the category or the technical idea of the present disclosure. For reference, to further explain features of the present disclosure, the following description is added. A secondary battery deteriorates with use; its internal resistance value thus fluctuates. For example, there is disclosed a method of estimating the DC resistance value of a deteriorated secondary battery. This method for estimating the DC resistance value of the secondary battery uses a temperature coefficient that has a temperature dependence according to the Arrhenius equation to correct the resistance value from the value of the secondary battery in the initial state. The DC resistance value of the deteriorated secondary battery is thereby estimated. However, the DC resistance value of a deteriorated secondary battery can also be affected by temperature-independent factors (i.e., non-temperature dependent factors). Therefore, there is room for improvement in the above method for estimating the DC resistance value of the secondary battery, from the viewpoint of more accurately estimating the DC resistance value of the deteriorated secondary battery. It is thus desired to provide a battery state estimation apparatus capable of estimating the DC resistance value of a deteriorated secondary battery with high accuracy. An aspect of the present disclosure described herein is set forth in the following clauses. According to an aspect of the present disclosure, a battery state estimation apparatus is provided that includes the following. An electrical information acquisition unit is configured to acquire a current value and a voltage value of a secondary battery by measurement. A change amount calculation unit is configured to calculate a current change amount ΔI and a voltage change amount ΔV of the secondary battery in a predetermined period using the current value and the voltage value acquired by the electrical information acquisition unit. A resistance value calculation unit is configured to calculate a measured DC resistance value of a DC resistance based on the current change amount ΔI and the voltage change amount ΔV calculated by the change amount calculation unit. A temperature acquisition unit is configured to acquire a temperature of the secondary battery.
A constant calculation unit is configured to calculate a non-temperature dependent constant Ra and a constant of a temperature dependent function Rb based on (i) an estimated DC resistance value and (ii) the measured DC resistance value calculated by the resistance value calculation unit. Here, the estimated DC resistance value is an estimated value of the measured DC resistance value in a predetermined period; the estimated DC resistance value is represented as a sum of (i) the non-temperature dependent constant Ra, which indicates a temperature-independent component of the DC resistance of the secondary battery, and (ii) the temperature dependent function Rb, which indicates a temperature-dependent component of the DC resistance of the secondary battery. Thus, the battery state estimation apparatus according to the above aspect expresses the estimated DC resistance value using both the non-temperature dependent component Ra and the temperature dependent component Rb of the secondary battery, and calculates the non-temperature dependent constant Ra and the constant of the temperature dependent function Rb based on the estimated DC resistance value, the measured DC resistance value, and the actually measured temperature of the secondary battery. Therefore, the non-temperature dependent constant Ra and the constant of the temperature dependent function Rb can be calculated with high accuracy. This makes it possible to estimate the DC resistance value of the deteriorated secondary battery with high accuracy. As described above, according to the above aspect, it is possible to provide a battery state estimation apparatus that can estimate the DC resistance value of the secondary battery after deterioration with high accuracy.
29,440
11860236
DETAILED DESCRIPTION Before any independent embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other independent embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. Use of "including" and "comprising" and variations thereof as used herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Use of "consisting of" and variations thereof as used herein is meant to encompass only the items listed thereafter and equivalents thereof. Also, the functionality described herein as being performed by one component may be performed by multiple components in a distributed manner. Likewise, functionality performed by multiple components may be consolidated and performed by a single component. Similarly, a component described as performing particular functionality may also perform additional functionality not described herein. For example, a device or structure that is "configured" in a certain way is configured in at least that way but may also be configured in ways that are not listed. In addition, it should be understood that embodiments of the invention may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, based on a reading of the detailed description, it should be recognized that, in at least one embodiment, electronic-based aspects may be implemented in software (e.g., instructions stored on non-transitory computer-readable medium) executable by one or more processing units, such as a microprocessor and/or application-specific integrated circuits ("ASICs"). As such, it should be noted that a plurality of hardware- and software-based devices, as well as a plurality of different structural components, may be utilized to implement the aspects. For example, "servers" and "computing devices" described in the specification can include one or more processing units, one or more computer-readable medium modules, one or more input/output interfaces, and various connections (e.g., a system bus) connecting the components. FIGS. 1A-1D illustrate a battery pack 105 that may be used to provide power to electrical equipment or devices that may be used in cold environments, for example, in temperatures below 0° C. (32° F.). The electrical device (not shown) may include a power tool (for example, a drill, a saw, a pipe cutter, an impact wrench, etc.), an outdoor tool (for example, a snow blower, a vegetation cutter, etc.), lighting equipment, a power source, etc. As shown in FIG. 1A, the battery pack 105 includes a battery pack housing 110. The housing 110 includes a plurality of openings that allow battery pack terminals 115 to mechanically and electrically couple the battery pack 105 to the power tool. In some embodiments, the battery pack terminals 115 may include a power line, a ground line, and one or more communication lines. In FIG. 1B, the housing 110 is removed to expose internal components of the battery pack 105.
As shown in FIG. 1B, a printed circuit board (PCB) 120 includes the battery pack terminals 115 and control circuitry (see FIGS. 4A-4B) to control operation of the battery pack 105, as explained in greater detail below. As shown in FIGS. 1A-1D, the illustrated battery pack 105 includes a single row of five battery cells 125. In other constructions, the battery pack 105 may include (see, e.g., FIGS. 2A-2D) more than one row of battery cells 125 and/or the row(s) may include fewer or more than five battery cells 125 (not shown). The battery cells 125 are held in place by a top case 130 and a bottom case 135. Wedge elements 140, 145 protrude from the respective cases 130, 135 to contact an outer surface of the battery cells 125 (for example, to hold the battery cells 125 in place). The cases 130, 135 surround the side surfaces of the battery cells 125 but leave the ends of the battery cells 125 exposed to allow them to be electrically coupled in a circuit. A temperature sensing device such as a thermistor 150 (see FIG. 1B) is electrically coupled to the PCB 120 (for example, using conductors, wires, etc.) to provide signals to the control circuitry corresponding to or representing a temperature of the interior of the battery pack 105 (e.g., a temperature of the battery cells 125). The thermistor 150 is mounted on the top case 130 and monitors the temperature of the battery cells 125 through a hole in the top case 130 (see FIG. 1D). In alternate embodiments (not shown), the thermistor 150 may be located in alternate locations, such as underneath the battery cells 125, mounted on the bottom case 135. In FIGS. 1B and 1D, the thermistor 150 is positioned above the middle battery cell 125. In other embodiments (not shown), the thermistor 150 may be located in alternate locations, such as above or below one of the other battery cells 125. In some embodiments (not shown), the battery pack 105 may include two or more thermistors 150. For example, the battery pack 105 may include a first thermistor located above the left-most battery cell 125 and a second thermistor located above the right-most battery cell 125. In FIG. 1C, the housing 110 is removed, and, in FIG. 1D, the housing 110, the bottom case 135, and two battery cells 125 have been removed. As mentioned above and as shown in FIG. 1D, the top case 130 includes a hole to allow the thermistor 150 to monitor the temperature of the battery cells 125. FIGS. 2A-2D illustrate another construction of a battery pack 205 that may be used to provide power to electrical equipment or devices that may be used in cold environments, as described above. The battery pack 205 is similar to the battery pack 105 described above, and common elements have the same reference number plus "100". The following description will focus on aspects of the battery pack 205 different from the battery pack 105. It should be noted, however, that features of the battery pack 205 may be incorporated or substituted into the battery pack 105, or vice versa. As shown in FIGS. 2A-2D, the battery pack 205 includes two rows of five battery cells 225. In other constructions, the battery pack 205 may include (see, e.g., FIGS. 1A-1D) one row of battery cells or more than two rows of battery cells 225 (not shown) and/or the row(s) may include fewer or more than five battery cells 225 (not shown). The battery cells 225 are held in place by an interior case 230 that surrounds side surfaces of the battery cells 225 but leaves ends of the battery cells 225 exposed to allow them to be electrically coupled in a circuit. Spacers 237 are provided between each pair of battery cells 225 to further hold the battery cells 225 in place.
Each spacer 237 extends from one end of an associated pair of battery cells 225 to the other end and makes contact with the side surfaces of the associated pair of battery cells 225. Although FIG. 2B shows an individual spacer 237 for each pair of battery cells 225, in some embodiments (not shown), multiple spacers 237 (e.g., all five illustrated spacers 237) may be formed into a single unit (in other words, a planar spacer with wedge elements similar to the wedge elements 140 and 145 of the battery pack 105). The battery pack 205 includes a thermistor 250 electrically coupled to the PCB 220. As shown in FIG. 2B, the thermistor 250 is mounted on top of the interior case 230 and monitors the temperature of the battery cells 225 through a hole in the top of the interior case 230. In other constructions (not shown), the thermistor 250 may be positioned in another location, such as between rows of battery cells 225, on a spacer 237, etc. FIG. 3 illustrates a row of battery cells 325 that may be included in the battery pack 105 or 205. The battery cells 325 may correspond to the battery cells 125 or 225, described above. As shown in FIG. 3, one or more heating elements 360 may be placed between the battery cells 325 and on the outermost battery cells 325 and contact a side surface of the battery cells 325. The heating elements 360 are generally located within the battery packs 105, 205 in an area away from the thermistors 150, 250 to ensure that the temperature measured by the thermistor 150, 250 corresponds to the temperature of the battery cells 125, 225, rather than of the heating elements 360. For example, in the battery pack 105 (see FIGS. 1A-1D), the heating elements 360 may be located underneath the battery cells 125 with the thermistor 150 located on top of the battery cells 125. As another example, in the battery pack 205 (see FIGS. 2A-2D), the heating elements 360 may be located between rows of battery cells 225 and/or underneath the bottom row of battery cells 225. In embodiments in which the thermistor 150, 250 is located in an alternate location, the heating elements 360 may be located in an alternate location away from the thermistor 150, 250. As shown in FIG. 3, the heating elements 360 may include resistors or other heat-generating electrical components. For example, as shown in FIG. 3, six (6) twenty-ohm (20 Ω) resistors are connected in parallel and operable to generate approximately thirty watts (30 W) of heating energy. In other embodiments (not shown), the heating elements 360 include carbon fibers (e.g., high density (3 k, 6 k, 12 k, etc.) carbon fibers), resistive heating coils formed of carbon fibers, etc. The carbon fiber heating elements 360 may be laid directly under and/or between the battery cells 325. Such carbon fiber heating elements 360 are disclosed in U.S. Patent Application Publication No. US 2011/0108538, published May 12, 2011, and in U.S. Patent Application Publication No. US 2015/0271873, published Sep. 24, 2015, the entire contents of which are hereby incorporated by reference. In other constructions (not shown), the carbon fiber may be formed as a jacket for one or more battery cells 325. The carbon fiber may be formed as a rubber jacket (e.g., molded into or surrounded by rubber material). The carbon fiber jacket may hold the battery cell(s) 325 in place within the battery packs 105 and 205. In some embodiments, heating elements 360 are embedded within the wedge elements 140 and/or 145 of the cases 130, 135 of the battery pack 105. Similarly, in some embodiments, heating elements 360 are embedded in the interior case 230 and/or the spacers 237 of the battery pack 205.
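As a numerical check on the resistor bank noted above (a worked example; the approximately 3 A heating current is the figure given later in this description), six 20 Ω resistors in parallel give

R_{eq} = \frac{20\ \Omega}{6} \approx 3.3\ \Omega, \qquad P = I^2 R_{eq} \approx (3\ \mathrm{A})^2 \times 3.3\ \Omega \approx 30\ \mathrm{W},

which is consistent with the stated thirty watts of heating energy.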
In alternate embodiments (not shown), the heating elements 360 may be located in a pad located underneath the battery cells 125, 225 or between rows of battery cells 225. Such a pad may be included in the battery pack 105, 205 for vibration reduction but may also include heating elements 360. For example, the pad may be made of carbon fiber material, as described above, that conducts electricity to generate heat. The pad of heating elements 360 may be molded or embedded into the housing 110, 210 of the battery pack 105, 205 (for example, in the interior of the bottom of the housing 110, 210). The heating element(s) 360 may provide heat to a secondary material that distributes heat to the battery cells 325. For example, the battery packs 105, 205 may include (not shown) a container, reservoir, or pouch of a secondary material such as wax, mineral oil, water, or other material. The container of secondary material may be in contact with the heating element(s) 360 and with the outer surface of the battery cells 325. The heating element(s) 360 provide heat to the secondary material, and, in turn, the heated secondary material provides heat to the battery cells 325. In further alternate embodiments (not shown), the heating elements 360 may be located inside individual jackets of each battery cell 325. In such embodiments, additional terminals may be provided on the battery cells 325 to provide power to the heating elements 360. In other embodiments (not shown), the heating elements 360 may be positive temperature coefficient thermistors (PTCs), the resistance of which increases as the temperature increases. Accordingly, using PTCs as the heating elements 360 provides another method of limiting the current drawn by the heating elements 360. For example, when the PTCs draw too much current and heat up beyond their rated temperature, their resistance increases to essentially create an open circuit. In some embodiments, the rated temperature of the PTCs is approximately 75° F. to 80° F. Other PTCs (e.g., 100° F., 150° F., etc.) may be selected for the desired safety level, heating capacity/operation, etc. FIG. 4A is a block diagram of the battery pack 105, 205 coupled to a charger 405. As shown in FIG. 4A, the battery pack 105, 205 includes an electronic processor 410 (for example, a microprocessor or other electronic controller), a memory 415, an indicator (for example, one or more light-emitting diodes (LEDs) 420), and the thermistor 150, 250. The battery pack 105, 205 also includes a charging switch 425 (for example, a field-effect transistor (FET)) electrically coupled between the charger 405 and the battery cells 125, 225. The battery pack 105, 205 also includes a heating switch 430 (for example, a FET) electrically coupled between the charger 405 and the heating elements 360. The memory 415 may include read-only memory (ROM), random access memory (RAM), other non-transitory computer-readable media, or a combination thereof. The processor 410 is configured to receive instructions and data from the memory 415 and execute, among other things, the instructions. In particular, the processor 410 executes instructions stored in the memory 415 to control the states of the switches 425 and 430 (for example, based on the temperature of the battery cells 125, 225, as explained below). The processor 410 is also configured to control the LEDs 420 (for example, to indicate a charging status of the battery pack 105, 205 or to indicate a condition of the battery pack 105, 205) and receive electrical signals relating to the temperature of the battery cells 125, 225 (for example, from the thermistor 150, 250).
FIG. 4B is a circuit diagram of a portion of the battery pack 105, 205. As shown in FIG. 4B, the heating elements 360 and the battery cells 125, 225 are coupled in parallel with each other to the charger 405. The battery cells 125, 225 are coupled to the charger 405 through a series combination of the charging switch 425 and a charging fuse 435. The heating elements 360 are coupled to the charger 405 through a series combination of the heating switch 430 and a heating fuse 440. The switches 425, 430 are controlled by the processor 410 to allow or prevent current from the charger 405 from flowing to the battery cells 125, 225 and the heating elements 360, respectively. The fuses 435, 440 are used to prevent the battery cells 125, 225 and the heating elements 360, respectively, from drawing too much current from the charger 405. For example, if the charging switch 425 or the heating switch 430 fails such that the charging switch 425 or the heating switch 430 is in a permanently closed state (in other words, in a conducting state), the corresponding fuse 435, 440 may trip to prevent current flow to the battery cells 125, 225 and the heating elements 360, respectively. For example, the heating elements 360 may draw approximately three amps (3.0 A) of current from the charger 405 during normal operation. However, if the heating switch 430 fails and cannot prevent current from flowing to the heating elements 360 as desired, the heating fuse 440 may be configured to trip (to prevent current from flowing to the heating elements 360) at approximately 4.0 to 4.5 A. Accordingly, in some embodiments, the heating fuse 440 may prevent the heating elements 360 from experiencing a current spike of 6 A. In some embodiments (not shown), the battery pack 105, 205 includes a second charging switch (for example, another FET) in series with the charging switch 425. In such embodiments, the second charging switch allows the current drawn by the battery cells 125, 225 to be controlled when one of the charging switch 425 and the second charging switch fails such that it is in a permanently closed state. In some embodiments, the second charging switch is in series with the charging switch 425 between the charging switch 425 and the charging fuse 435. As shown in FIG. 4B, the charging switch 425 includes a drain 445, a gate 450, and a source 455. In some embodiments, the source 455 of the charging switch 425 is coupled to a source of the second charging switch, and a drain of the second charging switch is coupled to the charging fuse 435 such that the second charging switch has an opposite orientation to the charging switch 425. In some embodiments, the battery pack 105, 205 includes components (not shown) to detect if the heating switch 430 has failed (e.g., is in a permanently closed state). For example, a resistance network below the heating switch 430 may be used to detect whether the heating switch 430 is in a permanently closed state. As another example, components in the circuit may allow the voltage across the heating elements 360 to be measured directly. Based on voltage measurements from the resistance network or the heating elements 360, the processor 410 may determine that the heating switch 430 has failed and is in a permanently closed state. When the processor 410 makes such a determination, the processor 410 may prevent the battery pack 105, 205 from being charged by, for example, opening the charging switch 425 to prevent current from flowing to the battery cells 125, 225.
Alternatively or additionally, the processor 410 may provide an output that indicates that the heating switch 430 has failed (for example, by controlling the LEDs 420 to illuminate in a predetermined manner). FIG. 5 is a flowchart of a method 500 of charging the battery pack 105, 205 performed by the processor 410. By executing the method 500, the processor 410 controls the state of the switches 425, 430 when the battery pack 105, 205 is coupled to the charger 405 based on signals received from the thermistor 150, 250 that relate to the temperature of the battery cells 125, 225. At block 505, the processor 410 determines that the battery pack 105, 205 is coupled to the charger 405. For example, the processor 410 may make such a determination by recognizing a change in voltage on the battery pack terminals 115, 215. At block 510, the processor 410 receives a signal from the thermistor 150, 250 that indicates a temperature of the battery cells 125, 225. In some embodiments, the processor 410 alternatively receives a signal from a thermistor that senses a temperature outside of the pack 105, 205 (e.g., an ambient air sensor, as explained in greater detail below). In other embodiments, the processor receives a signal from a thermistor of another device (e.g., a thermistor of the charger 405 via a communication terminal of the battery pack terminals 115, 215). At block 515, the processor 410 determines whether the temperature of the battery cells 125, 225 is above a predetermined temperature threshold (for example, 0° C.). In some embodiments, the predetermined temperature threshold may vary depending on the chemistry of the battery cells 125, 225. In other words, battery cells of a first chemistry may require that the temperature of the battery cells be above a different predetermined temperature threshold than battery cells of a second chemistry. If necessary, the processor 410 determines the predetermined temperature threshold for the chemistry of the battery cells 125, 225. When the temperature of the battery cells 125, 225 is not above the predetermined temperature threshold, the processor 410 does not allow the battery cells 125, 225 to be charged. Accordingly, at block 520, the processor 410 opens the charging switch 425 (to prevent the battery cells 125, 225 from receiving power from the charger 405). At block 520, the processor 410 also closes the heating switch 430 to provide power to the heating elements 360. The method 500 then proceeds back to block 510 to monitor the temperature of the battery cells 125, 225 and, at block 515, determines whether the temperature of the battery cells 125, 225 has increased above the predetermined temperature threshold. When the temperature of the battery cells 125, 225 is above the predetermined temperature threshold, at block 525, the processor 410 closes the charging switch 425 to provide power to the battery cells 125, 225 to charge the battery cells 125, 225. In some embodiments, at block 525, the processor 410 opens the heating switch 430 to stop providing power to the heating elements 360. In other embodiments, the processor 410 may control the heating switch 430 to maintain its closed state to continue to provide power to the heating elements 360 (for example, to help ensure that the temperature of the battery cells 125, 225 remains above the predetermined temperature threshold (e.g., above 0° C.)).
In yet other embodiments, the processor 410 may control the heating switch 430 using a pulse width modulation (PWM) signal to periodically provide power to the heating elements 360 during charging of the battery cells 125, 225 to help ensure that the temperature of the battery cells 125, 225 remains above the predetermined temperature threshold. In such embodiments, the processor 410 may maintain the heating switch 430 in the closed state and/or provide the PWM signal to the heating switch 430 based on an ambient air temperature received from an ambient air sensor (for example, another thermistor) that determines the temperature outside the battery pack 105, 205. For example, when the ambient air temperature is below the predetermined temperature threshold or is below a second predetermined temperature threshold (for example, a temperature lower than the predetermined temperature threshold), the processor 410 may maintain the closed state of the heating switch 430 or control the heating switch 430 using a PWM signal. In some embodiments, the duty cycle of the PWM signal is based on the ambient air temperature sensed by the ambient air sensor. At block 530, the processor 410 determines whether charging of the battery cells 125, 225 is complete. For example, the processor 410 may monitor a voltage of the battery cells 125, 225 to make such a determination. As another example, the charger 405 may monitor the voltage of the battery cells 125, 225 and may send a signal to the processor 410 (for example, through communication terminals of the battery pack terminals 115, 215) to indicate to the processor 410 that charging is complete. When charging is not complete, the method 500 proceeds back to block 510 to monitor the temperature of the battery cells 125, 225. Accordingly, the processor 410 repeats blocks 510, 515, 525, and 530 as long as the temperature of the battery cells 125, 225 is above the predetermined temperature threshold and charging of the battery cells 125, 225 is not yet complete. When charging of the battery cells 125, 225 is complete, at block 535, the processor 410 opens the charging switch 425 to stop charging the battery cells 125, 225. After the battery cells 125, 225 have been charged, the processor 410 may open the heating switch 430 to prevent the heating elements 360 from receiving power from the charger 405. In other embodiments, the processor 410 may control the heating switch 430 to maintain the heating elements 360 in a state of low-power maintenance heating so that the battery pack 105, 205 may be more easily charged again later. For example, the processor 410 may control the heating switch 430 using a PWM signal based on an ambient air sensor, as described above. In such embodiments, the heating elements 360 may receive power from the battery cells 125, 225 when the battery pack 105, 205 is removed from the charger 405. While providing power to the heating elements 360 from the battery cells 125, 225 may deplete the battery cells 125, 225 more quickly, it may also allow the temperature of the battery cells 125, 225 to be maintained above the predetermined temperature threshold. Accordingly, in some embodiments, the battery cells 125, 225 may charge more quickly when coupled to the charger 405 than if the heating elements 360 were not controlled to provide low-power maintenance heating to the battery cells 125, 225. As described above, in some embodiments, the heating elements 360 increase the temperature of the battery cells 125, 225 from below the predetermined temperature threshold to meet or exceed the predetermined temperature threshold in a time period (e.g., approximately six minutes).
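For illustration, the temperature-gated flow of method 500 can be condensed as in the following sketch. The pack interface (read_cell_thermistor, the open/close switch calls, and charging_complete) is an assumed placeholder for this example, not an API from this description:

    TEMP_THRESHOLD_C = 0.0   # predetermined temperature threshold (e.g., 0 deg C)

    def charge_with_preheat(pack):
        while not pack.charging_complete():              # block 530
            cell_temp_c = pack.read_cell_thermistor()    # block 510
            if cell_temp_c > TEMP_THRESHOLD_C:           # block 515
                pack.close_charging_switch()             # block 525: charge cells
                pack.open_heating_switch()               # one described variant
            else:
                pack.open_charging_switch()              # block 520: no charging
                pack.close_heating_switch()              # heat the cells first
        pack.open_charging_switch()                      # block 535: stop charging

The described PWM variants would replace the simple open/close of the heating switch with a duty-cycled drive based on the ambient air temperature.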
When the battery cells 125, 225 are above the predetermined temperature threshold, full charging current can be drawn by the battery pack 105, 205 in environments in which the ambient temperature is below the predetermined temperature threshold after the battery pack 105, 205 has been coupled to the charger 405 for the time period (again, after about six minutes). It should be understood that each block diagram is simplified and corresponds to an illustrated embodiment. The block diagrams illustrate examples of the components and connections, and fewer or additional components/connections may be provided. For example, in some embodiments, the battery packs 105, 205, and 605 also include an ambient air sensor (for example, another thermistor) that monitors the temperature outside the housing 110, 210 of the battery pack 105, 205. As another example, as described above with respect to FIG. 4B, the battery pack 105, 205 may include additional circuitry (for example, a resistance network) to detect a failure of the heating switch 430 (e.g., that the switch 430 is in a permanently closed state). Similarly, the flowcharts in FIGS. 5 and 7 are simplified and illustrate examples, and fewer or additional steps may be provided. FIGS. 6A-6D illustrate another construction of a battery pack 605 that may be used to provide power to electrical equipment or devices. The battery pack 605 is similar to the battery packs 105 and 205 described above, and common elements have the same reference number in the "600" series. The following description will focus on aspects of the battery pack 605 different from the battery packs 105 and 205. It should be noted, however, that features of the battery pack 605 may be incorporated or substituted into the battery pack 105, 205, or vice versa. As shown in FIGS. 6A-6D, the battery pack 605 includes three rows of five battery cells. While the battery cells are not shown in FIGS. 6A-6D, the location of the battery cells is apparent based on the holes in an interior case 630 as shown in FIG. 6C. In other constructions, the battery pack 605 may include one row of battery cells (see, e.g., FIGS. 1A-1D), two rows of battery cells (see, e.g., FIGS. 2A-2D), or more than three rows of battery cells (not shown) and/or the row(s) may include fewer or more than five battery cells (not shown). The battery cells are held in place by the case 630 surrounding side surfaces of the outer battery cells but leaving ends of the battery cells exposed to allow them to be electrically coupled in a circuit (for example, by connectors 632 shown in FIG. 6B). The illustrated case 630 includes a left case portion 636 and a right case portion 638. In some embodiments, spacers (not shown) are provided between each pair of battery cells to further hold the battery cells in place (see, e.g., spacers 237 of FIGS. 2C and 2D). The battery pack 605 includes one or more temperature sensing devices, such as thermistors 650, electrically coupled to the PCB 620. As shown in FIGS. 6B-6C, the thermistor 650 is mounted on top of the case 630 and monitors the temperature of the interior of the battery pack 605 (i.e., a temperature of the battery cell(s)) through a hole in the top of the case 630. In the illustrated construction, the thermistor(s) 650 are located near the PCB 620 so that less wiring is used to couple the thermistor(s) 650 to the PCB 620 compared to thermistors located farther from the PCB 620. In some embodiments, the battery pack 605 includes additional thermistors 650 in other locations, as described previously.
For example,FIG.6Dshows a cut-away view of the battery pack605from the bottom of the battery pack605. In this example, the battery pack605includes five thermistors650mounted on the top of the case630and coupled to the PCB620. In some embodiments, each thermistor650may measure a temperature of a respective battery cell or string of battery cells. For example, each thermistor650ofFIG.6Dmay measure the temperature of the string of three battery cells located proximate the associated thermistor650. In other constructions (not shown), the thermistors650may be positioned in other locations, such as between rows of battery cells, mounted on the bottom or sides of the case630, etc. In other constructions (not shown), one or more of the thermistors650may be mounted on the weld/conductive strap connected to a battery cell. As described above with respect to the battery packs105and205, in some embodiments, the battery pack605includes resistors that, for example, may be used as heat-generating components360to heat the battery pack605in cold temperatures. Also as explained previously and as shown inFIG.4B, the battery cells and these resistors (i.e., heating elements360) are coupled in parallel with each other. Accordingly, these resistors may receive power from the battery cells to, for example, maintain the heating elements360in a state of low power maintenance heating. These resistors may also be used for other purposes. In some embodiments, in addition to or as an alternative to being used as heat-generating components360, these resistors may be used to discharge one or more battery cells of the battery pack605to, for example, prevent failure of the battery pack605when an abnormal condition is detected (e.g., when abnormal temperatures are detected by one or more of the thermistors650). FIG.7is a flowchart of a method of monitoring for and inhibiting failure of the battery pack605when a failure condition of the battery pack605is detected. In the illustrated method, failure may be inhibited by discharging one or more battery cells of the battery pack605when a failure condition is detected. At block705, the processor410receives a signal from one or more of the thermistors650indicating a temperature of one or more battery cells of the battery pack605. Based on the signal(s) from the thermistor(s)650, at block710, the processor410determines whether the battery pack605is in a failure condition. When the battery pack is determined not to be in a failure condition (at block710), the method700proceeds back to block705to continue to monitor the temperatures measured by the thermistors650. To determine a failure condition, the processor410may, for example, determine that the battery pack605is in a failure condition based on a temperature differential between temperature measurements from two different thermistors (e.g., one temperature measurement is ten degrees higher than one or more other temperature measurements). As another example, when any one of the thermistors650transmits a signal indicating that the temperature is above a predetermined temperature threshold, the processor410may determine that the battery pack is in a failure condition. In response to determining that the battery pack605is in a failure condition, at block715, the processor410may control the switches425,430such that one or more battery cells are discharged through the resistors (i.e., the heating elements360). In some embodiments, the processor410may discharge the entire battery pack605(i.e., all battery cells). 
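The block710check can be summarized in a few lines. Below is a hedged sketch of that logic only; the absolute limit, the function name, and the list-based interface are assumptions, while the ten-degree differential comes from the example above.

```python
FAILURE_TEMP_C = 70.0      # absolute per-cell limit (assumed value)
MAX_DIFFERENTIAL_C = 10.0  # "ten degrees higher" differential from the example above

def is_failure_condition(temps_c: list[float]) -> bool:
    """Block 710: fault if any thermistor reads too hot or the readings diverge."""
    if any(t > FAILURE_TEMP_C for t in temps_c):
        return True
    return (max(temps_c) - min(temps_c)) > MAX_DIFFERENTIAL_C
```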
In other embodiments, the processor410may discharge a subset of the battery cells (i.e., a string of battery cells whose temperature was determined to be higher than that of the other strings of battery cells). In some embodiments, it may be undesirable to produce excessive heat when discharging the battery cells after a failure condition is determined, for example, in constructions in which the resistors are also used as heat-generating components360, as the excessive heat will be transferred back to the cells experiencing a failure condition. Accordingly, the processor410may control the switches425and/or430to discharge the battery cells using a PWM signal. Using the PWM signal to discharge the battery cells causes less current to flow through the resistors per unit of time such that the heat generated by the resistors is less than when current is allowed to flow through the resistors at all times. To reduce heat transfer to the battery cells during discharge through the resistors, in some embodiments, the battery pack605includes resistors that are not used as heating elements360. In other words, the primary purpose of such resistors would be to allow for battery cell discharge when a failure condition is detected by the processor410rather than as heating elements as described previously. In such embodiments, the resistors may be thermally separated and isolated from the battery cells. For example, the resistors may be insulated from the cells (e.g., by mica tape), located outside of the case630, thermally coupled to a heat sink exposed to an air flow path to be cooled, etc. In such embodiments, the processor410may optionally control the switches to discharge the battery cells using a PWM signal to further reduce possible heating. In some embodiments, the processor410monitors the temperature from the thermistors650and discharges the battery cells through the resistors when the battery pack605is not coupled to a device, such as the charger405or a power tool. In other embodiments, the processor410may also execute the method when the battery pack605is connected to a device. In some embodiments, the battery pack605may detect a failure condition in other manners besides monitoring temperature(s) measured by thermistors650.FIG.8is a block diagram of the battery pack605according to one such embodiment. As shown inFIG.8, the battery pack605includes conductive plates805to determine whether fluid has entered the battery pack housing610and to measure the conductivity of such fluid (i.e., ingress fluid; conductivity is expressed in Siemens per meter, where Siemens is current divided by voltage). FIG.9is a bottom perspective view of the battery pack605with the housing610removed. As shown inFIG.9, the battery pack605includes a number of (e.g., two) conductive plates805located underneath the battery cells. The conductive plates805may, for example, be mounted on a single PCB, separate PCBs, stand-offs, or directly on the bottom of the interior case630. Locating the conductive plates805proximate or on the bottom of the interior case630allows for detection of ingress fluid when the battery pack605is placed in an area that has standing fluid, for example. As another example, such conductive plates805may detect ingress fluid if enough ingress fluid has entered the housing610to create a pool of fluid at the bottom of the battery pack605. In some embodiments, the conductive plates805are located elsewhere in the battery pack605(e.g., on the sides or top of the interior case630). 
In some embodiments, the battery pack605includes additional conductive plates805in other locations (i.e., multiple sets of conductive plates805). In some embodiments, the conductive plates805are located within, for example, one millimeter, two millimeters, or three millimeters of each other such that the conductivity of a small amount of ingress fluid can be detected and measured. The closer together conductive plates805are located, the less fluid is required to measure conductivity. In some embodiments, the conductive plates805are located in a stacked arrangement such that, when ingress fluid is present in the battery pack605, current flows between the largest surfaces of the conductive plates805when a voltage is applied to the conductive plates as described in greater detail below. FIG.10Ais a bottom view of the battery pack605according to one embodiment. As shown inFIG.10A, the conductive plates805are coupled to a PCB1005via wires1010. The PCB1005may be coupled to the PCB620via additional wires (not shown) to allow the processor410to measure the conductivity of ingress fluid using the conductive plates805as explained in greater detail below. In some embodiments, the wires1010may connect the conductive plates805directly to the PCB620(i.e., the PCB1005may not be present in some embodiments). FIG.10Bis a bottom view of the battery pack605according to another example embodiment. As shown inFIG.10B, conductive plates1015are smaller than the conductive plates805ofFIG.10Abut perform a similar function. In some embodiments, the conductive plates1015are mounted on a PCB1020that is coupled to the PCB620via wires (not shown) to allow the processor410to measure the conductivity of ingress fluid using the conductive plates1015. In some embodiments, an off-the-shelf conductivity sensor is used alternatively or in addition to the conductive plates805and1015. For example, the conductivity sensor may be a contacting conductivity sensor or an inductive conductivity sensor (e.g., toroidal or electrodeless). FIGS.11A and11Billustrate methods1100and1150of measuring the conductivity of ingress fluid in the battery pack605according to one embodiment. With reference toFIG.11A, at block1105, the processor410determines the conductivity between the conductive plates805, as explained in greater detail below with respect toFIG.11B. At block1110, the processor410determines whether the conductivity is above a predetermined conductivity threshold (e.g., approximately 4.5 Siemens per meter). When the conductivity is less than the predetermined conductivity threshold, the method1100proceeds back to block1105such that the processor410continues to determine the conductivity between the conductive plates805. At block1110, when the conductivity is above the predetermined threshold, at block1115, the processor410disables the battery pack605by, for example, discharging the battery cells through the heat-generating components360or other resistors, as explained previously with respect to block715ofFIG.7. FIG.11Billustrates the method1150that may be executed by the processor410in some embodiments to measure the conductivity of ingress fluid in the battery pack605. When executing the method1150, the processor410periodically measures the conductivity between the conductive plates805(e.g., every five seconds, one second, one hundred milliseconds, etc.). 
At block1155, the processor410determines whether it is time for a conductivity measurement (i.e., whether the preset periodic time has elapsed since the previous conductivity measurement). When it is not yet time for a conductivity measurement, the method1150remains at block1155. When it is time for a conductivity measurement, at block1160, the processor410provides a voltage to the conductive plates805. At block1165, the processor410measures a current across the conductive plates805. At block1170, the processor410calculates the conductivity between the conductive plates805based on the voltage provided by the processor410, the current measured by the processor410, and the size of the conductive plates805. For example, the processor410may calculate the conductivity of the ingress fluid by dividing the measured current by the provided voltage and by then dividing the result by the surface area of the conductive plates805. At block1175, the processor410determines whether the calculated conductivity is above a predetermined threshold. In some embodiments, the predetermined threshold may be approximately or just below a conductivity of sea water, which is approximately 4.8 Siemens per meter, to ensure that the processor410disables the battery pack605when an ingress fluid with a conductivity greater than or equal to the conductivity of sea water is detected. For example, the predetermined threshold may be set to approximately 4.5 Siemens per meter. At block1175, when the calculated conductivity is less than the predetermined threshold, the processor410determines that no ingress fluid is present or that the ingress fluid has a conductivity that does not risk causing a failure condition of the battery pack605. Accordingly, the method1150proceeds back to block1155to continue to measure the current and calculate the conductivity between the conductive plates805at periodic intervals. At block1175, when the calculated conductivity is greater than or equal to the predetermined threshold, at block1180, the processor410disables the battery pack605. For example, the processor410may discharge the battery cells through the heat-generating components360or other resistors as explained previously with respect to block715ofFIG.7. In some embodiments, the processor410may discharge parallel strings of battery cells (e.g., five of the fifteen battery cells) separately to reduce the amount of current being discharged by the battery cells that may be conducted by the ingress fluid, as explained in greater detail below with respect toFIG.12. For example, a single string of five battery cells (e.g., 18650 cells (having a diameter of 18 mm and a length of 65 mm), 20700 cells, and 21700 cells) or two strings of five battery cells (e.g., 18650 cells) may not provide enough current to be conducted through the ingress fluid. Accordingly, discharging a single string of five battery cells one at a time or two strings of battery cells instead of all battery cells may inhibit or prevent further failure of the battery pack605. As shown inFIG.12, the battery cells of the battery pack605are separated into three separate strings of battery cells1205. In some embodiments, each string of battery cells1205has five battery cells in series, and the strings of battery cells1205are connected in parallel to the battery terminals615. 
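A compact sketch of method1150follows. The hardware accessors are hypothetical; the conductivity is computed exactly as the text describes (conductance I/V divided by plate area), with a comment noting the more general cell-constant form.

```python
import time

CONDUCTIVITY_THRESHOLD = 4.5  # S/m, just below sea water (~4.8 S/m), per the text
MEASURE_PERIOD_S = 1.0        # e.g., every second

def monitor_ingress_fluid(pack) -> None:
    while True:
        time.sleep(MEASURE_PERIOD_S)            # block 1155: wait for the next sample
        v = pack.apply_plate_voltage()          # block 1160: drive the plates
        i = pack.measure_plate_current()        # block 1165: read the current
        sigma = (i / v) / pack.plate_area_m2    # block 1170, as described; the general
                                                # form scales conductance by the cell
                                                # constant (plate spacing / area)
        if sigma >= CONDUCTIVITY_THRESHOLD:     # block 1175: compare to the threshold
            pack.disable_and_discharge()        # block 1180: disable the pack
            return
```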
In some embodiments, the strings of battery cells1205are also connected in parallel to the heat-generating components360or other resistors, as explained previously with respect to block715ofFIG.7, to allow the battery cells to be discharged through the heat-generating components360or other resistors. The battery pack605also includes FETs1210. As shown inFIG.12, the battery pack605includes a FET1210between each battery cell of the second and third strings of battery cells1205. In some embodiments, the processor410may control the FETs1210to separate the strings of battery cells1205. For example, when a failure condition of the battery pack605is detected, the processor410opens the FETs1210to prevent current flow through the FETs1210. Accordingly, the third string of battery cells1205(i.e., the bottom string of battery cells1205inFIG.12) is isolated and the battery pack605is effectively a battery pack with two parallel strings of five battery cells. In some embodiments, the processor410discharges the two parallel strings of battery cells1205(i.e., the top two strings of battery cells1205inFIG.12). In some embodiments, the single isolated string of battery cells1205(i.e., the bottom string of battery cells1205inFIG.12) is self-discharged. In other words, the stored charge of the isolated string of battery cells1205may be reduced without a connection between the electrodes of the battery cells to, for example, the heat-generating components360or other resistors. In some embodiments, two parallel strings of battery cells (e.g.,18650battery cells) may not provide enough current to be conducted through ingress fluid in the battery pack605. In some embodiments, the processor410prevents the FETs1210from being closed once a failure condition is detected and the FETs1210are opened. AlthoughFIG.12shows the FETs1210located between the second and third strings of battery cells1205, in other embodiments, the FETs1210are located between the first and second strings of battery cells1205. In such embodiments, when the processor410opens the FETs1210in response to a failure condition being detected, the second and third strings of battery cells1205(i.e., the bottom two strings of battery cells1205inFIG.12) are isolated, and the battery pack605is effectively a battery pack with one parallel string of five battery cells. Similar toFIG.12,FIG.13is a circuit diagram of a portion of the battery pack605according to another embodiment. As shown inFIG.13, the battery pack605includes FETs1210between the first and second strings of battery cells1205and between the second and third strings of battery cells1205. Accordingly, upon detection of a failure condition, the processor410can open the FETs1210to individually isolate each string of battery cells1205. In some embodiments, each isolated string of battery cells1205may be self-discharged. Similar toFIGS.12and13,FIG.14is a circuit diagram of a portion of the battery pack605according to yet another embodiment. The battery pack605includes switches1405between the strings of battery cells1205that can be used to separate or isolate the strings of battery cells1205. For example, the switches1405may be electromechanical switches actuated by the FETs1210controlled by the processor410. In some embodiments, when a failure condition of the battery pack605is detected, the processor410controls the FETs1210to open the switches1405to prevent current flow through the switches1405. 
Accordingly, the third string of battery cells1205(i.e., the bottom string of battery cells1205inFIG.12) is isolated and the battery pack605is effectively a battery pack with two parallel strings of five battery cells. In some embodiments, the processor410discharges the two parallel strings of battery cells1205(i.e., the top two strings of battery cells1205inFIG.12), as explained above with respect toFIG.12. In any of these embodiments, each string of battery cells1205that is isolated from the heat-generating components360or resistors may be self-discharged. In some embodiments, fuses may be used in place of the FETs1210and/or the switches1405. In such embodiments, after a failure condition is detected, all strings of battery cells1205may begin discharging at the same time (for example, through the heat-generating components360or other resistors) until the current through the fuses exceeds a predetermined limit and causes one or more of the fuses to blow and prevent current flow. When this occurs, a reduced number of strings of battery cells (e.g., one or two strings of battery cells1205) will continue to discharge. In embodiments that use fuses, the processor410may not need to prevent the FETs1210from later allowing current to flow after a failure condition has been detected (as mentioned previously) because once the fuses blow, they will prevent current from flowing until the fuses are replaced. AlthoughFIG.14shows the switches1405and the FETs1210between each battery cell of the second and third strings of battery cells1205, in some embodiments, the battery pack605may additionally or alternatively include switches1405, FETs1210, and/or fuses located between the first and second strings of battery cells1205similar to previous embodiments described herein. Once the battery pack605has been disabled, the battery pack605may remain non-functional. However, in some embodiments, after the battery pack605is disabled or discharged and the failure condition is no longer detected, the processor410may control the FETs1210to allow the battery pack605to function normally (e.g., provide current from all strings of battery cells1205). In some embodiments, after the battery pack605is disabled and one or more strings of battery cells have been isolated, the processor410controls the FETs1210to allow the battery pack605to function normally in response to detection of an external resistor bank attachment being coupled to the battery pack605as described in greater detail below. Although disabling the battery pack605and discharging the battery cells through resistors in response to detecting a failure condition has been described with respect to the battery pack605, the battery packs105,205may include such features and functionality in some embodiments. For example, the battery pack205may include two parallel strings of battery cells (e.g., 20700 cells or 21700 cells). In some embodiments, FETs, other switches, fuses, or a combination thereof may be located between each battery cell of the two parallel strings of battery cells to allow a processor of the battery pack205to isolate the strings of battery cells when a failure condition is detected. Once the strings of battery cells are isolated, the processor may control one or two strings of battery cells to discharge through the heat-generating components360or other resistors as explained previously. 
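As a rough illustration of the isolation-and-discharge response ofFIGS.12-14, the sketch below opens the isolation devices, latches them open, and discharges only the strings that remain connected; every accessor name and the duty-cycle value are hypothetical.

```python
def isolate_and_discharge(pack) -> None:
    pack.open_isolation_fets()   # split the parallel strings (FETs, switches, or fuses)
    pack.latch_fets_open()       # keep the FETs from re-closing after the fault
    for string in pack.connected_strings():
        # PWM-limited discharge keeps resistor heating down, as described above
        pack.discharge_through_resistors(string, pwm_duty=0.25)
    # Strings with no resistor path are left to self-discharge.
```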
In any of the above embodiments, each string of battery cells1205that is isolated from the heat-generating components360or resistors (e.g., the bottom string of battery cells1205ofFIG.12) may be self-discharged. Accordingly, when a failure condition (e.g., based on temperature, ingress fluid detection, etc.) of a battery cell, a string of battery cells, or a battery pack is detected, the battery pack can be partially or completely discharged or disabled using various methods as described herein. In some embodiments, the battery packs105,205, and605may include an indicator that conveys information to a user. For example, the indicator may be a light-emitting diode (LED), a speaker, etc. In some embodiments, the indicator may indicate that the processor has detected a failure condition of the battery pack (e.g., based on temperature, ingress fluid detection, etc.). In some embodiments, the battery pack may include a wireless communication transceiver that transmits a signal to an external device (e.g., smart phone) that indicates that the processor has detected a failure condition of the battery pack. Although the previously-described embodiments explain discharging the battery cells through resistors (e.g., heat-generating components360or other resistors) that are integrated with the battery pack, in some embodiments, an external resistor bank attachment may be coupled to the battery pack105,205, or605to discharge the battery pack through one or more resistors in the external resistor bank attachment. In some embodiments, the external resistor bank attachment is a cap that couples to the terminals of the battery pack. In some embodiments, the external resistor bank attachment is used to fully discharge a battery pack that has been disabled using one of the previously-described methods. For example, when a battery pack is disabled and a failure condition is indicated (e.g., by an LED, a speaker, communication to a smart phone, etc.), the external resistor bank attachment may be coupled to the battery pack to discharge the battery cells through the external resistor bank attachment. In some embodiments, isolated strings of battery cells (e.g., the bottom string of battery cells1205ofFIG.12) are discharged through the external resistor bank attachment. For example, the processor of the battery pack may determine that the external resistor bank attachment is coupled to the battery pack (e.g., by identifying a known resistance of the external resistor bank attachment, by communicating with a processor of the external resistor bank attachment, etc.). After the other strings of battery cells (e.g., the top two strings of battery cells1205inFIG.12) in the battery pack are discharged as explained previously, the processor of the battery pack may reconnect the isolated string or strings of battery cells such that the previously-isolated strings of battery cells are able to discharge through the external resistor bank attachment. In some embodiments, the battery packs105,205, and605may additionally or alternatively shut down upon determining that a failure condition of the battery pack exists based on characteristics in addition to those described above. The processor of the battery pack may shut down or prevent operation of the battery pack (e.g., by controlling a switch that prevents current from flowing from the battery cells to an attached device such as a power tool). 
In other embodiments, the processor of the battery pack may communicate to a processor of an attached device (e.g., a power tool, a charger, etc.) that the battery pack should not be used. In some embodiments, the processor of the battery pack may shut down the battery pack in response to, for example, an over-temperature determination (e.g., the temperature of the battery pack exceeds a predetermined threshold), an overcharge determination (e.g., the state of charge of the battery pack exceeds a predetermined threshold), an under-temperature determination (e.g., the temperature of the battery pack is below a predetermined threshold), or an undercharge determination (e.g., the state of charge of the battery pack is below a predetermined threshold). Accordingly, in some embodiments, the processor of the battery pack105,205, and605determines different types of failure conditions and executes different amelioration techniques based on the determined type of failure condition. For example, the battery pack may shut down to prevent operation in response to an over-temperature, overcharge, under-temperature, or undercharge condition of the battery pack. Continuing this example, the battery pack may discharge one or more strings of battery cells through internal or external resistors, or isolate one or more strings of battery cells, in response to an over-temperature condition of an individual battery cell (or group of battery cells) or detection of conductive ingress fluid within the battery pack. Although the invention has been described in detail with reference to certain preferred embodiments, variations and modifications exist within the scope and spirit of one or more independent aspects of the invention as described.
DETAILED DESCRIPTION Specific structural or functional descriptions with regard to embodiments of the present disclosure disclosed in the present specification or application are provided solely for describing embodiments of the present disclosure, and the embodiments of the present disclosure may be carried out in various forms and shall not be interpreted as being limited to the embodiments described in the present specification or application. In the following, embodiments of the present disclosure shall be described in detail with reference to the attached drawings. FIG.1is a simplified drawing of the system for diagnosing battery life according to one embodiment of the present disclosure,FIG.2is a graph illustrating relaxation voltage in the method for diagnosing battery life according to one embodiment of the present disclosure,FIG.3is a flow chart illustrating the system for diagnosing battery life according to one embodiment of the present disclosure, andFIG.4is a table showing the SOH (State Of Health) map for a system for diagnosing battery life, whose inputs are changes in battery state of charge, temperature data and relaxation voltage, and whose output is battery life and status. FIG.1is a simplified drawing of the system for diagnosing battery life according to one embodiment of the present disclosure. Referring toFIG.1, a system for vehicle battery life diagnosis40is provided, the system comprising a measuring unit10which is installed inside a vehicle and measures a state of charge change and a temperature of a vehicle battery, a calculating unit20which calculates battery relaxation voltage when a battery power line is cut off, and a diagnosis unit30which uses the state of charge change and temperature measured at the measuring unit10and the battery relaxation voltage calculated at the calculating unit20to diagnose the life and status of the battery50. A battery is one of the core components which determine the speed and range of an electric vehicle, providing the electrical power that is essential in an electric vehicle, and secondary batteries such as lithium ion batteries are generally used as such batteries. The present disclosure is intended to diagnose the service timing of high-voltage, high-capacity vehicle batteries beforehand and prevent the occurrence of serious safety issues. To this end, vehicle battery life diagnosis and life prediction must be performed accurately. A conventional battery mounted on the vehicle is not separated from the vehicle until the vehicle is scrapped, insofar as it does not severely impact vehicle operation, making it impossible to check the status of its deterioration. Therefore, technology for diagnosing the life and status of an electric vehicle battery, which directly affects the performance, reliability and safety of an electric vehicle, is important. Conventional methods for diagnosing battery life and status apply partial data from a battery management system based on the lithium battery degradation model to perform estimates. However, battery life diagnosis using partial battery management system data based on a battery degradation model has a problem of low accuracy. Therefore, in the present disclosure, to diagnose and predict vehicle battery life, change in vehicle battery state of charge and temperature data measured in a measuring unit10and battery relaxation voltage calculated in a calculating unit20are used to diagnose battery life and status in a diagnosing unit30. 
Specifically, the battery measuring unit10is connected to a battery50of a vehicle, and may be provided at various points inside the vehicle. The calculating unit20of the present disclosure calculates a relaxation voltage when the supply of electrical power to a power line through which electrical power of the battery is supplied is cut off. Because high current flows from a high voltage battery, the electrical connection is cut off when not being used. Relaxation voltage refers to the change in residual voltage at either terminal of a battery when electric connection to the battery has been cut off. In the case of the present disclosure, in order to measure such relaxation voltage, the initial voltage at the battery output terminal when the battery power line is cut off may be measured, and after a certain time the increased voltage at the battery output terminal may be measured, and the relaxation voltage may be calculated from the difference between the initial voltage and the increased voltage. Finally, using the battery relaxation voltage calculated here and the change in state of charge and temperature measured at the measuring unit10, the diagnosing unit30diagnoses the life and status of the battery. Calculating relaxation voltage from the difference between initial voltage and increased voltage, using the fact that internal resistance increases with battery degradation, can improve the accuracy of battery life diagnosis over that of conventional battery degradation model-based diagnosis using partial battery management system data. The life of the battery is diagnosed only after change in state of charge and battery temperature data is measured at the measuring unit10, and calculation of battery relaxation voltage has been completed. That is, battery life diagnosis and life prediction occur only after measurement at the measuring unit10and calculation of relaxation voltage at the calculating unit20have occurred. Note that the charge state of the battery, state of charge, may be abbreviated as SOC, and the life of the battery, state of health, may be abbreviated as SOH. The purpose of accurate battery SOH diagnosis is to accurately diagnose when the battery requires servicing, and to inform customers and manufacturers beforehand of any serious safety issues so that customer inconvenience and accidents can be prevented. The measuring unit10may measure a change in a battery's state of charge between a battery use start time point and a battery use end time point. The measuring unit10measures change in the state of charge of the battery, and the change in state of charge of the battery is measured between a battery use start time point and a battery use end time point. The current state of charge of the vehicle battery is measured, where the state of charge is 100% at full charge and 0% when the battery is completely discharged. Use of the vehicle battery begins at the time point when vehicle operation begins to obtain driving power, and use of the vehicle battery ends at the time point when operation of the vehicle is completed and the vehicle comes to a stop. Therefore, by using the measured state of charge at the battery use start time point and the battery use end time point, and the SOH map to be described later, it is possible to predict the life (state of health) of a vehicle battery and diagnose the life and status of the battery. The battery use start time point and the battery use end time point may be measured based on time points when the vehicle transmission lever is shifted. 
In a case where change in state of charge of a vehicle battery is measured based on a battery use start time point and a battery use end time point, the battery use start time point and battery use end time point can be determined based on whether or not the transmission lever of the vehicle has been shifted. This is because the time points at which the transmission lever of the vehicle is shifted normally coincide with the battery use start and end time points. Therefore, when the transmission lever of the vehicle is placed at “D”, this is recognized as a driving mode and the time point when battery use begins, and the state of charge (SOC) of the battery is measured here. When the transmission lever of the vehicle is placed at “P”, this is recognized as the time point when battery use ends, and the state of charge (SOC) of the battery is measured here. Using the measured values and an SOH map, the state of health of the vehicle battery may be predicted and the life and status of the battery may be diagnosed. FIG.2is a graph illustrating relaxation voltage in the method for diagnosing battery life according to one embodiment of the present disclosure. The calculating unit20may calculate a battery relaxation voltage using the difference between an initial voltage at the battery output terminal when the battery power line is cut off, and an increased voltage at the battery output terminal after a certain time delay. Referring toFIG.2, with respect to battery relaxation voltage, when the power line of the vehicle battery is cut off, battery voltage rises automatically and converges upon a certain value. Here, the change in voltage from the time point the power line of the vehicle battery is cut off through the BMS (battery management system) of the vehicle to the time point when battery voltage has risen is referred to as the relaxation voltage. The time point when vehicle operation begins can be seen as the battery use start time point, and the voltage measured during vehicle operation with the battery cathode and anode under load is called the closed circuit voltage, while the voltage measured when the vehicle is not in operation and the battery cathode and anode are not under load is called the open circuit voltage. Because the time point when the power line of the vehicle battery is cut off is the time point when the vehicle comes to a stop after operation, movement from the closed circuit voltage (CCV) to the open circuit voltage (OCV) at the time point the vehicle comes to a stop is monitored. Accordingly, the difference between the last open circuit voltage (OCV) at the battery use end time point and the closed circuit voltage (CCV) at the time point the vehicle comes to a stop is the battery relaxation voltage. Ultimately, as the closed circuit voltage (CCV) at the time point the vehicle comes to a stop is the initial voltage at the battery output terminal when the battery power line is cut off, and the final open circuit voltage (OCV) at the battery use end time point is the increased voltage at the battery output terminal after a certain time, the difference between the initial voltage and the increased voltage can be used to calculate the relaxation voltage of the battery. 
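The calculation itself reduces to a subtraction. A minimal sketch, with illustrative voltages:

```python
def relaxation_voltage(ccv_at_cutoff_v: float, ocv_settled_v: float) -> float:
    """Relaxation voltage = settled OCV minus the CCV when the power line is cut off."""
    return ocv_settled_v - ccv_at_cutoff_v

# Example (illustrative values): 3.58 V under load settling to 3.72 V
# gives a relaxation voltage of 0.14 V.
assert abs(relaxation_voltage(3.58, 3.72) - 0.14) < 1e-9
```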
The diagnosing unit30is able to diagnose the current life and status of the battery using the position of the currently measured battery relaxation voltage between a minimum value which corresponds to the relaxation voltage of the battery in an initial state and a maximum value which corresponds to the relaxation voltage of the battery at the end of its life. As the lithium ion battery of a vehicle cannot be detached from the vehicle, the battery remains attached to the vehicle during use. Here, as internal resistance of the battery gradually increases with battery use, the voltage variability in the relaxation voltage of the battery increases as the remaining life of the battery decreases. As the relaxation voltage of the battery to be diagnosed lies between the relaxation voltage of an initial state battery and the relaxation voltage of a battery at the end of its life, its position between these two values can be used to diagnose the life and status of the battery to be diagnosed. By determining that the remaining life of the battery to be diagnosed is shorter the closer its relaxation voltage is to the relaxation voltage of a battery at the end of its life, and that the remaining life of the battery to be diagnosed is longer the closer its relaxation voltage is to the relaxation voltage of an initial state battery, the life and status of the battery to be diagnosed can be diagnosed. FIG.4is a table showing the SOH (State Of Health) map for a system for diagnosing battery life, whose inputs are changes in battery state of charge, temperature data and relaxation voltage, and whose output is battery life and status. The diagnosing unit30may be equipped with a battery SOH (State Of Health) map whose inputs are changes in battery state of charge, temperature data and relaxation voltage, and whose output is battery life and status, and may diagnose battery life and status using this battery SOH (State Of Health) map. Referring toFIG.4, a battery SOH map includes a multiplicity of tables such as that ofFIG.4in which the battery relaxation voltage of an initial state battery and experimentally determined battery relaxation voltages at different degrees of battery degradation are recorded. As a battery degrades, internal resistance of the battery increases, and using this characteristic, if charging or discharging is interrupted for a certain time during constant current charging or discharging, the battery voltage drops or rises toward the OCV. The relaxation voltage is defined as the size of this change in voltage. To find out the life and status of a battery to be diagnosed, the change in vehicle battery state of charge as measured by the measuring unit10at the battery use start time point and the battery use end time point and the initial battery temperature are found on the SOH map. The reason for measuring change in battery charge and temperature in the present disclosure is to perform battery life diagnosis and prediction under identical conditions by measuring the temperature of the vehicle battery prior to operation, and to ascertain the state of charge of a vehicle battery before use and when use has ended to calculate its relaxation voltage, thereby accurately diagnosing and measuring the remaining life of the battery. 
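A sketch of the interpolation just described, combined with the faulty-battery case covered below; the reference voltages and the 20% service hint are illustrative assumptions, not values from the disclosure.

```python
def diagnose_soh(v_relax: float, v_initial: float, v_eol: float) -> tuple[float, str]:
    """Place the measured relaxation voltage between the initial-state minimum
    and the end-of-life maximum to estimate remaining life (SOH, in percent)."""
    if v_relax > v_eol:
        return 0.0, "faulty"          # degraded beyond end of life
    if v_relax <= v_initial:
        return 100.0, "healthy"
    soh = 100.0 * (v_eol - v_relax) / (v_eol - v_initial)
    return soh, "healthy" if soh > 20.0 else "service soon"
```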
Then, the relaxation voltage of the battery to be diagnosed is found by finding the difference between the initial voltage at the output terminal of the battery when the battery power line is cut off, and the increased voltage at the output terminal of the battery after a certain time, and then finding the corresponding battery relaxation voltage in the battery SOH map. Here, using the corresponding relaxation voltage found in the battery SOH map, the corresponding degree of battery degradation in the top left corner of the table can be found. Using this degree of battery degradation, the life and status of the battery to be diagnosed can be determined. The diagnosing unit30may, in a case where the relaxation voltage of the battery as calculated by the calculating unit20is greater than the relaxation voltage at the end of its life, diagnose the status of the battery as faulty. As for the method for diagnosing the status of a battery as faulty, the internal resistance of the battery gradually increases as the battery is used, and therefore the size of the change in voltage in the relaxation voltage grows gradually as battery life decreases. The relaxation voltage of the battery to be diagnosed is located between the relaxation voltage of an initial state battery and the relaxation voltage of a battery at the end of its life, and in a case where the battery relaxation voltage calculated for the battery to be diagnosed is greater than the relaxation voltage of a battery at the end of its life, it may be determined that the degradation of the battery to be diagnosed is greater than that of a battery at the end of its life, and the status of the battery to be diagnosed may be diagnosed as faulty. FIG.3is a flow chart illustrating the system for diagnosing battery life according to one embodiment of the present disclosure. Referring toFIG.3, the method for diagnosing battery life of a vehicle battery according to one embodiment of the present disclosure is comprised of measuring change in state of charge and temperature of a vehicle battery in a measuring unit10at S30and S80, cutting off a battery power line and calculating a battery relaxation voltage in a calculating unit20at S110, and using the change in state of charge and temperature of a vehicle battery and the calculated battery relaxation voltage to diagnose battery life and status in a diagnosing unit30at S120. Prior to measuring change in state of charge and temperature of a vehicle battery in a measuring unit10at S30, the method may further comprise determining whether or not a driver's brake pedal signal is turned on or an ignition signal is turned on at S20. In a case where a driver's brake pedal signal is turned off and the ignition signal is turned off in determining whether or not a driver's brake pedal signal is turned on or an ignition signal is turned on at S20, the method may further comprise not measuring change in state of charge and temperature of the vehicle battery or resetting a measured value at S10. Measuring change in state of charge and temperature of a vehicle battery (S30, S80) may include measuring change in vehicle battery state of charge between a battery use start time point and a battery use end time point. The battery use start time point and battery use end time point may be measured based on a vehicle transmission lever shift time point at S40. After measuring based on a vehicle transmission lever shift time point at S40, the method may further comprise vehicle operation at S50. 
After vehicle operation at S50and prior to measuring change in vehicle battery state of charge and temperature at S80, the method may further comprise determining whether or not a driver's brake pedal signal is turned on and the transmission is switched to parking mode, or whether or not an ignition button is turned off at S60. In a case where, in determining whether or not a driver's brake pedal signal is turned on and the transmission is switched to parking mode, or whether or not an ignition button is turned off at S60, it is determined that the driver's brake pedal is turned off, that the transmission has not been switched to parking mode, or that the ignition button is turned on, the method may further comprise not measuring change in vehicle battery state of charge and temperature at S70. After determining whether or not a driver's brake pedal signal is turned on and the transmission is switched to parking mode, or whether or not an ignition button is turned off at S60and prior to calculating battery relaxation voltage at S110, the method may further include determining whether or not a charge cable has been connected to the vehicle within 5 minutes at S90. In a case where, in determining whether or not a charge cable has been connected to the vehicle within 5 minutes at S90, a charge cable has been connected to the vehicle within 5 minutes, the method may further comprise transitioning to a vehicle battery charge diagnosis mode at S100. Calculating battery relaxation voltage at S110may include measuring an initial voltage at a battery output terminal when a battery power line is cut off, measuring increased voltage at the battery output terminal after a certain time, then calculating battery relaxation voltage using the difference between initial voltage and increased voltage. Diagnosing battery life and status at S120may include using a position of the relaxation voltage of the currently measured battery between a minimum value corresponding to a relaxation voltage of an initial state battery and a maximum value corresponding to a relaxation voltage of a battery at the end of its life to diagnose the life and status of the current battery. Diagnosing battery life and status at S120may further include diagnosing battery life and status using a battery SOH (state of health) map whose inputs are changes in battery state of charge, temperature data and relaxation voltage, and whose output is battery life and status. Diagnosing battery life and status at S120may further include, in a case where the battery relaxation voltage calculated at the calculating unit is greater than the relaxation voltage of a battery at the end of its life, diagnosing the battery as faulty. Whereas specific embodiments of the present disclosure have been illustrated and described in the foregoing, it shall be self-evident to a person having ordinary skill in the art that the present disclosure may be modified and changed in various ways without departing from the technical idea of the present disclosure as provided in the appended claims.
DETAILED DESCRIPTION The following disclosure provides many different embodiments, or examples, for implementing different features of the disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Further, spatially relative terms, such as “beneath,” “below,” “lower,” “above,” “upper” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. The apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the disclosure are approximations, the numerical values set forth in the specific examples are reported as precisely as possible. Any numerical value, however, inherently contains certain errors necessarily resulting from the standard deviation found in the respective testing measurements. Also, as used herein, the term “about” generally means within 10%, 5%, 1%, or 0.5% of a given value or range. Alternatively, the term “about” means within an acceptable standard error of the mean when considered by one of ordinary skill in the art. Other than in the operating/working examples, or unless otherwise expressly specified, all of the numerical ranges, amounts, values and percentages such as those for quantities of materials, durations of times, temperatures, operating conditions, ratios of amounts, and the likes thereof disclosed herein should be understood as modified in all instances by the term “about.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the present disclosure and attached claims are approximations that can vary as desired. At the very least, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Ranges can be expressed herein as from one endpoint to another endpoint or between two endpoints. All ranges disclosed herein are inclusive of the endpoints, unless specified otherwise. Referring to the figures, wherein like numerals indicate like parts throughout the several views.FIG.1illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring toFIG.1, a circuit10for parameter PSRR (Power Supply Rejection Ratio) measurement includes a filter11, a first regulator12and a second regulator13. 
The circuit10may be configured to achieve PSRR measurement, and may be implemented as "on-die parameter" (ODP) measurement. In accordance with some embodiments of the present disclosure, the filter11has an AC signal input terminal111, a DC signal input terminal112and a combined signal output terminal113. The AC signal input terminal111may be configured for receiving an AC input signal VINA. The DC signal input terminal112may be configured for receiving a DC input signal VIND. The combined signal output terminal113may be configured for outputting a combined output signal Voutcaccording to the AC input signal VINAand the DC input signal VIND. In accordance with some embodiments of the present disclosure, the first regulator12has a first input terminal121and a first output terminal122. The first input terminal121may be coupled to the combined signal output terminal113, and may be configured for receiving the combined output signal Voutc. Thus, the combined output signal Voutcmay be inputted to the first regulator12. The first output terminal122may be configured for outputting a first output signal Vout1. The first output signal Vout1has a first AC component signal VAC1and a first DC component signal VDC1. The first output signal Vout1may be obtained according to the combined output signal Voutc. Therefore, the first output signal Vout1may be adjusted according to the combined output signal Voutc, for example, the first AC component signal VAC1may be adjusted according to the AC input signal VINA, and the first DC component signal VDC1may be adjusted according to the DC input signal VIND. In accordance with some embodiments of the present disclosure, the first regulator12may receive a DC power signal VIN1. Therefore, the first output signal Vout1may be adjusted according to the combined output signal Voutcand DC power signal VIN1, for example, the first AC component signal VAC1may be adjusted according to the AC input signal VINA, and the first DC component signal VDC1may be adjusted according to the DC input signal VINDand the DC power signal VIN1. In accordance with some embodiments of the present disclosure, a first probe14may be configured to measure the first output signal Vout1. Furthermore, the first AC component signal VAC1of the first output signal Vout1and the first DC component signal VDC1of the first output signal Vout1may be separated from the first output signal Vout1, or the first AC component signal VAC1of the first output signal Vout1and the first DC component signal VDC1of the first output signal Vout1may be obtained from the first output signal Vout1. Therefore, the first output signal Vout1may be monitored and quantified by the first probe14. In accordance with some embodiments of the present disclosure, the second regulator13has a second input terminal131and a second output terminal132. The second input terminal131may be coupled to the first output terminal122, and may be configured for receiving the first output signal Vout1. Thus, the first output signal Vout1may be inputted to the second regulator13. The second output terminal132may be configured for outputting a second output signal Vout2. The second output signal Vout2has a second AC component signal VAC2and a second DC component signal VDC2. The second output signal Vout2may be obtained according to the first output signal Vout1. 
Therefore, the second output signal Vout2may be adjusted according to the first output signal Vout1, for example, the second AC component signal VAC2may be adjusted according to the first AC component signal VAC1, and the second DC component signal VDC2may be adjusted according to the first DC component signal VDC1. In accordance with some embodiments of the present disclosure, a second probe15may be configured to measure the second output signal Vout2, and the second AC component signal VAC2of the second output signal Vout2and the second DC component signal VDC2of the second output signal Vout2may be separated from the second output signal Vout2, or the second AC component signal VAC2of the second output signal Vout2and the second DC component signal VDC2of the second output signal Vout2may be obtained from the second output signal Vout2. Therefore, the second output signal Vout2may be monitored and quantified by the second probe15. In accordance with some embodiments of the present disclosure, a parameter PSRR of the second regulator13may be obtained according to the first AC component signal VAC1and the second AC component signal VAC2. That is, the PSRR of the second regulator13may be expressed as: PSRR=20*log(VAC2/VAC1). In accordance with some embodiments of the present disclosure, the first regulator12may be configured to combine the first AC component signal VAC1and the first DC component signal VDC1and to provide the signals to a DC power signal terminal (the second input terminal131) of the second regulator13. Therefore, the first regulator12may support high output current and prevent loading of the second regulator13. Furthermore, no redesign is required, since both the first regulator12and the second regulator13may have the same design. Besides, any reliability issue for the first regulator12may be negligible in the case of a core device design. FIG.2illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring toFIG.2, a circuit20for parameter PSRR measurement includes a filter21, a first regulator12and a second regulator13. In accordance with some embodiments of the present disclosure, the filter21has an AC signal input terminal211, a DC signal input terminal212and a combined signal output terminal213. The AC signal input terminal211may be configured for receiving an AC input signal VINA. The DC signal input terminal212may be configured for receiving a DC input signal VIND. The combined signal output terminal213may be configured for outputting a combined output signal Voutcaccording to the AC input signal VINAand the DC input signal VIND. In accordance with some embodiments of the present disclosure, the filter21includes a resistor216and a capacitor217. The resistor216may be coupled to the capacitor217, the combined signal output terminal213and the DC signal input terminal212. That is, one end of the resistor216may be coupled to the DC signal input terminal212, and the other end of the resistor216may be coupled to the capacitor217and the combined signal output terminal213. The capacitor217may be coupled to the resistor216, the combined signal output terminal213and the AC signal input terminal211. That is, one end of the capacitor217may be coupled to the AC signal input terminal211, and the other end of the capacitor217may be coupled to the resistor216and the combined signal output terminal213. 
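The expression above translates directly into a helper; the example values are illustrative.

```python
import math

def psrr_db(vac1: float, vac2: float) -> float:
    """PSRR = 20*log(VAC2/VAC1); more negative values mean stronger rejection."""
    return 20.0 * math.log10(vac2 / vac1)

# Example: 1 mV of ripple at the second regulator's input reduced to 10 uV at its
# output gives 20*log10(0.01) = -40 dB.
print(psrr_db(1e-3, 10e-6))  # -40.0
```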
In accordance with some embodiments of the present disclosure, since the resistor216and the capacitor217occupy a much smaller area of the chip than other on-chip elements, the resistor216and the capacitor217may be configured to implement an on-chip filter. Furthermore, a cut-off frequency of the filter21may be low, for example, hundreds of Hz. FIG.3illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring toFIG.3, a circuit30for parameter PSRR measurement includes a filter31, a first regulator12and a second regulator13. In accordance with some embodiments of the present disclosure, the filter31has an AC signal input terminal311, a DC signal input terminal312and a combined signal output terminal313. The AC signal input terminal311may be configured for receiving an AC input signal VINA. The DC signal input terminal312may be configured for receiving a DC input signal VIND. The combined signal output terminal313may be configured for outputting a combined output signal Voutcaccording to the AC input signal VINAand the DC input signal VIND. In accordance with some embodiments of the present disclosure, the filter31includes a plurality of switches316,317and a plurality of capacitors318,319. The switches316,317may be coupled to the capacitors318,319, the DC signal input terminal312and the combined signal output terminal313. The capacitors318,319may be coupled to the switches316,317, the AC signal input terminal311and the combined signal output terminal313. In accordance with some embodiments of the present disclosure, the filter31has a first switch316, a second switch317, a first capacitor318and a second capacitor319. The first switch316may be coupled to the first capacitor318, the second switch317, and the DC signal input terminal312. In other words, one end of the first switch316may be coupled to the DC signal input terminal312, and the other end of the first switch316may be coupled to the first capacitor318and the second switch317. The second switch317may be coupled to the first capacitor318, the second capacitor319, the first switch316and the combined signal output terminal313. In other words, one end of the second switch317may be coupled to the first switch316and the first capacitor318, and the other end of the second switch317may be coupled to the second capacitor319and the combined signal output terminal313. The first capacitor318may be coupled to the first switch316, the second switch317, the second capacitor319and the AC signal input terminal311. In other words, one end of the first capacitor318may be coupled to the first switch316and the second switch317, and the other end of the first capacitor318may be coupled to the second capacitor319and the AC signal input terminal311. The second capacitor319may be coupled to the second switch317, the first capacitor318, the combined signal output terminal313and the AC signal input terminal311. In other words, one end of the second capacitor319may be coupled to the second switch317and the combined signal output terminal313, and the other end of the second capacitor319may be coupled to the first capacitor318and the AC signal input terminal311. In accordance with some embodiments of the present disclosure, the filter31may be an on-chip switch-cap filter, and the switches316,317and the capacitors318and319occupy a much smaller area of the chip than other on-chip elements. FIG.4illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. 
FIG. 4 illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring to FIG. 4, a circuit 40 for parameter PSRR measurement includes a filter 41, a first regulator 12 and a second regulator 13. In accordance with some embodiments of the present disclosure, the filter 41 has an AC signal input terminal 411, a DC signal input terminal 412 and a combined signal output terminal 413. The AC signal input terminal 411 may be configured for receiving an AC input signal VINA. The DC signal input terminal 412 may be configured for receiving a DC input signal VIND. The combined signal output terminal 413 may be configured for outputting a combined output signal Voutc according to the AC input signal VINA and the DC input signal VIND. In accordance with some embodiments of the present disclosure, the filter 41 includes a first resistor 416, a second resistor 417 and a capacitor 418. The first resistor 416 may be coupled to the capacitor 418, the second resistor 417, the combined signal output terminal 413 and the DC signal input terminal 412. In other words, one end of the first resistor 416 may be coupled to the DC signal input terminal 412, and the other end of the first resistor 416 may be coupled to the capacitor 418, the second resistor 417 and the combined signal output terminal 413. The second resistor 417 may be coupled to the capacitor 418, the first resistor 416, the combined signal output terminal 413 and a ground. In other words, one end of the second resistor 417 may be coupled to the capacitor 418, the first resistor 416 and the combined signal output terminal 413, and the other end of the second resistor 417 may be coupled to the ground. The capacitor 418 may be coupled to the first resistor 416, the second resistor 417, the combined signal output terminal 413 and the AC signal input terminal 411. In other words, one end of the capacitor 418 may be coupled to the AC signal input terminal 411, and the other end of the capacitor 418 may be coupled to the first resistor 416, the second resistor 417 and the combined signal output terminal 413. In accordance with some embodiments of the present disclosure, since the resistors 416, 417 and the capacitor 418 occupy a much smaller area of the chip than other on-chip elements, the resistors 416, 417 and the capacitor 418 may be configured to implement an on-chip filter. Alternatively, the resistors 416, 417 and the capacitor 418 may be configured to implement an off-chip filter. FIG. 5 illustrates a circuit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring to FIG. 5, a circuit 50 for parameter PSRR measurement includes a filter 51, a first regulator 12 and a second regulator 13. In accordance with some embodiments of the present disclosure, the filter 51 has an AC signal input terminal 511, a DC signal input terminal 512 and a combined signal output terminal 513. The AC signal input terminal 511 may be configured for receiving an AC input signal VINA. The DC signal input terminal 512 may be configured for receiving a DC input signal VIND. The combined signal output terminal 513 may be configured for outputting a combined output signal Voutc according to the AC input signal VINA and the DC input signal VIND. In accordance with some embodiments of the present disclosure, the filter 51 includes an inductor 516 and a capacitor 517. The inductor 516 may be coupled to the capacitor 517, the combined signal output terminal 513 and the DC signal input terminal 512. That is, one end of the inductor 516 may be coupled to the DC signal input terminal 512, and the other end of the inductor 516 may be coupled to the capacitor 517 and the combined signal output terminal 513.
The capacitor 517 may be coupled to the inductor 516, the combined signal output terminal 513 and the AC signal input terminal 511. That is, one end of the capacitor 517 may be coupled to the AC signal input terminal 511, and the other end of the capacitor 517 may be coupled to the inductor 516 and the combined signal output terminal 513. In accordance with some embodiments of the present disclosure, the inductor 516 and the capacitor 517 may be configured to implement an on-chip filter. Alternatively, the inductor 516 and the capacitor 517 may be configured to implement an off-chip filter. FIG. 6 illustrates a regulator in accordance with some embodiments of the present disclosure. Referring to FIG. 1 and FIG. 6, the first regulator 12 has a first input terminal 121 and a first output terminal 122. Furthermore, the first regulator 12 may include an operational amplifier 123, a transistor 124 and resistors 125, 126. In accordance with some embodiments of the present disclosure, the operational amplifier 123 drives the transistor 124 with more current if the voltage at its inverting input terminal drops below the output of the voltage reference at the first input terminal 121 (the non-inverting input terminal). The resistors 125, 126 may be configured to adjust the first output signal Vout1. In accordance with some embodiments of the present disclosure, the first regulator 12 may be a linear regulator, a switching regulator, a linear voltage regulator (LVR), or a low drop-out regulator (LDO). Furthermore, in accordance with some embodiments of the present disclosure, the second regulator 13 may be a linear regulator, a switching regulator, a linear voltage regulator (LVR), or a low drop-out regulator (LDO). The first regulator 12 may be the same as the second regulator 13.
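The feedback arrangement described for FIG. 6 matches the familiar series-pass regulator topology, so its nominal output can be sketched as Vout = Vref*(1 + R1/R2). This is a hedged illustration under that assumption; the reference voltage and resistor values below are invented, not taken from the disclosure.

```python
def regulator_vout(vref: float, r_top: float, r_bottom: float) -> float:
    """Ideal output of a FIG. 6 style regulator: the op-amp drives the pass
    transistor until the feedback tap between the two resistors equals the
    reference at the non-inverting input, giving Vout = Vref*(1 + R1/R2)."""
    return vref * (1.0 + r_top / r_bottom)

# Assumed values: a 1.2 V reference with a 15 kOhm over 10 kOhm divider.
print(f"Vout = {regulator_vout(1.2, 15e3, 10e3):.2f} V")  # 3.00 V
```

Choosing the two resistors is how, per the description, resistors 125 and 126 "adjust" the first output signal Vout1.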
FIG. 7 illustrates a semiconductor device for parameter PSRR measurement in accordance with some embodiments of the present disclosure. FIG. 8 illustrates a function unit for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring to FIG. 7 and FIG. 8, a semiconductor device 70 for parameter PSRR measurement includes a function unit 71 and a digital control unit 72. The function unit 71 has at least one function block 711, 712 and a parameter PSRR measurement block 713. The parameter PSRR measurement block 713 may be easily incorporated within an on-die parameter (ODP) circuit for product level monitoring. Furthermore, the parameter PSRR measurement block 713 may incorporate various function blocks (MOS Id/Vth monitoring, passives, RO, analog building blocks) to form the function unit 71. In accordance with some embodiments of the present disclosure, the parameter PSRR measurement block 713 includes a filter 716, a first regulator 717 and a second regulator 718. Referring to FIG. 1 and FIG. 8, the filter 716 in FIG. 8 may be the same as the filter 11 in FIG. 1, the first regulator 717 in FIG. 8 may be the same as the first regulator 12 in FIG. 1, and the second regulator 718 in FIG. 8 may be the same as the second regulator 13 in FIG. 1. The filter 716 may be configured for receiving an AC input signal VINA and a DC input signal VIND, and for outputting a combined output signal Voutc according to the AC input signal VINA and the DC input signal VIND. The first regulator 717 may be coupled to the filter 716, and may be configured for receiving the combined output signal Voutc and for outputting a first output signal Vout1. The first output signal Vout1 has a first AC component signal VAC1 and a first DC component signal VDC1. The first output signal Vout1 may be obtained according to the combined output signal Voutc. Therefore, the first output signal Vout1 may be adjusted according to the combined output signal Voutc; for example, the first AC component signal VAC1 may be adjusted according to the AC input signal VINA, and the first DC component signal VDC1 may be adjusted according to the DC input signal VIND. In accordance with some embodiments of the present disclosure, the second regulator 718 may be coupled to the first regulator 717, and may be configured for receiving the first output signal Vout1 and for outputting a second output signal Vout2. The second output signal Vout2 has a second AC component signal VAC2 and a second DC component signal VDC2. The second output signal Vout2 may be obtained according to the first output signal Vout1. Therefore, the second output signal Vout2 may be adjusted according to the first output signal Vout1; for example, the second AC component signal VAC2 may be adjusted according to the first AC component signal VAC1, and the second DC component signal VDC2 may be adjusted according to the first DC component signal VDC1. A parameter PSRR of the second regulator 718 may be obtained according to the first AC component signal VAC1 and the second AC component signal VAC2. In accordance with some embodiments of the present disclosure, the semiconductor device 70 further includes a first output pin 73 and a second output pin 74. The first output pin 73 may be configured for outputting the first output signal Vout1, and the second output pin 74 may be configured for outputting the second output signal Vout2. In accordance with some embodiments of the present disclosure, the first probe 14 may be configured to measure the first output signal Vout1, and the second probe 15 may be configured to measure the second output signal Vout2. In accordance with some embodiments of the present disclosure, the digital control unit 72 may be coupled to the function unit 71, and may be configured for selecting at least one function block 711, 712 or the parameter PSRR measurement block 713. The semiconductor device 70 further includes a selecting pin 75 for selecting at least one function block 711, 712 or the parameter PSRR measurement block 713. The digital control unit 72 may be a multiplexer. Therefore, the required block may be selected by the digital control unit 72, and a predetermined measurement SOP may be followed using automation for measuring all dies. In accordance with some embodiments of the present disclosure, referring to FIG. 2 and FIG. 8, the filter 716 in FIG. 8 may be the same as the filter 21 in FIG. 2. The filter 716 may include a resistor and a capacitor, and the resistor may be coupled to the capacitor. In accordance with some embodiments of the present disclosure, referring to FIG. 3 and FIG. 8, the filter 716 in FIG. 8 may be the same as the filter 31 in FIG. 3. The filter 716 may include a plurality of switches and a plurality of capacitors, and the switches may be coupled to the capacitors. The filter 716 may include a first switch, a second switch, a first capacitor and a second capacitor. The first switch may be coupled to the first capacitor and the second switch. The second switch may be coupled to the first capacitor, the second capacitor and the first switch. The first capacitor may be coupled to the first switch, the second switch and the second capacitor. The second capacitor may be coupled to the second switch and the first capacitor. In accordance with some embodiments of the present disclosure, referring to FIG. 4 and FIG. 8, the filter 716 in FIG. 8 may be the same as the filter 41 in FIG. 4.
The filter 716 may include a first resistor, a second resistor and a capacitor. The first resistor may be coupled to the capacitor and the second resistor. The second resistor may be coupled to the capacitor and the first resistor. The capacitor may be coupled to the first resistor and the second resistor. In accordance with some embodiments of the present disclosure, referring to FIG. 5 and FIG. 8, the filter 716 in FIG. 8 may be the same as the filter 51 in FIG. 5. The filter 716 may include an inductor and a capacitor, and the inductor may be coupled to the capacitor. In accordance with some embodiments of the present disclosure, the semiconductor device 70 may be able to implement on-die parameter PSRR measurement without external components or the conventional line injector circuit. Therefore, problems arising due to external components, for example de-embedding noise from the PCB and the testing environment, may be mitigated. Furthermore, using two regulators may minimize the additional design effort required of the circuit designer, and programmable and fast measurements may be done for all dies. Besides, easy debugging of the regulators may be performed in test-chip analog IPs. FIG. 9 is a flow diagram showing a method for parameter PSRR measurement in accordance with some embodiments of the present disclosure. Referring to FIG. 1 and FIG. 9, in step S91, an AC input signal VINA and a DC input signal VIND are input to a filter 11. In step S92, a combined output signal Voutc of the filter 11 is output to a first regulator 12, and the combined output signal Voutc may be obtained according to the AC input signal VINA and the DC input signal VIND. In step S93, a first output signal Vout1 of the first regulator 12 is measured; the first output signal Vout1 has a first AC component signal VAC1 and a first DC component signal VDC1, and the first output signal Vout1 may be obtained according to the combined output signal Voutc. In step S94, the first output signal Vout1 of the first regulator 12 is output to a second regulator 13. In step S95, a second output signal Vout2 of the second regulator 13 is measured; the second output signal Vout2 has a second AC component signal VAC2 and a second DC component signal VDC2, and the second output signal Vout2 may be obtained according to the first output signal Vout1. In step S96, a parameter PSRR of the second regulator 13 is calculated according to the first AC component signal VAC1 and the second AC component signal VAC2. In step S97, a predetermined frequency of the AC input signal may be varied, and the above steps may be repeated to calculate the parameter PSRR over a desired frequency range. Therefore, the method for parameter PSRR measurement may enable wide-frequency-range measurement. Furthermore, the method for parameter PSRR measurement may save testing resources and provides a high-accuracy means of monitoring process margins in regulators. In accordance with some embodiments of the present disclosure, the method for parameter PSRR measurement may further include a step of inputting a DC power signal VIN1 to the first regulator 12, and the first DC component signal VDC1 of the first output signal Vout1 may be obtained according to the DC input signal VIND and the DC power signal VIN1. In some embodiments, a circuit for parameter PSRR measurement is disclosed, including: a filter, a first regulator and a second regulator. The filter has an AC signal input terminal, a DC signal input terminal and a combined signal output terminal. The AC signal input terminal may be configured for receiving an AC input signal.
The DC signal input terminal may be configured for receiving a DC input signal. The combined signal output terminal may be configured for outputting a combined output signal according to the AC input signal and the DC input signal. The first regulator has a first input terminal and a first output terminal. The first input terminal may be coupled to the combined signal output terminal and may be configured for receiving the combined output signal. The first output terminal may be configured for outputting a first output signal. The first output signal has a first AC component signal and a first DC component signal. The first output signal may be obtained according to the combined output signal. The second regulator has a second input terminal and a second output terminal. The second input terminal may be coupled to the first output terminal and may be configured for receiving the first output signal. The second output terminal may be configured for outputting a second output signal. The second output signal has a second AC component signal and a second DC component signal. The second output signal may be obtained according to the first output signal. A parameter PSRR of the second regulator may be obtained according to the first AC component signal and the second AC component signal. In some embodiments, a semiconductor device for parameter PSRR measurement is disclosed, including: a function unit and a digital control unit. The function unit has at least one function block and a parameter PSRR measurement block. The parameter PSRR measurement block includes a filter, a first regulator and a second regulator. The filter may be configured for receiving an AC input signal and a DC input signal, and may be configured for outputting a combined output signal according to the AC input signal and the DC input signal. The first regulator may be coupled to the filter and may be configured for receiving the combined output signal, and may be configured for outputting a first output signal. The first output signal has a first AC component signal and a first DC component signal. The first output signal may be obtained according to the combined output signal. The second regulator may be coupled to the first regulator and may be configured for receiving the first output signal, and may be configured for outputting a second output signal. The second output signal has a second AC component signal and a second DC component signal. The second output signal may be obtained according to the first output signal. A parameter PSRR of the second regulator may be obtained according to the first AC component signal and the second AC component signal. The digital control unit may be coupled to the function unit, and may be configured for selecting at least one function block or the parameter PSRR measurement block. 
In some embodiments, a method for parameter PSRR measurement is disclosed, including: inputting an AC input signal and a DC input signal to a filter; outputting a combined output signal of the filter to a first regulator, the combined output signal obtained according to the AC input signal and the DC input signal; measuring a first output signal of the first regulator, the first output signal having a first AC component signal and a first DC component signal, the first output signal obtained according to the combined output signal; outputting the first output signal of the first regulator to a second regulator; measuring a second output signal of the second regulator, the second output signal having a second AC component signal and a second DC component signal, the second output signal obtained according to the first output signal; and calculating a parameter PSRR of the second regulator according to the first AC component signal and the second AC component signal.
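As a minimal sketch of the method summarized above (steps S91 through S97), the loop below sweeps the injected AC frequency and evaluates the PSRR at each point. The signal-injection and probe-readout details are hypothetical placeholders, since the disclosure leaves the instrumentation to the tester; only the PSRR expression itself comes from the description.

```python
import math

def measure_vac(node: str, freq_hz: float) -> float:
    """Hypothetical probe readout: return the AC amplitude at `node` for the
    currently injected frequency. A real setup would use probes on the two
    regulator outputs plus an FFT or lock-in; fixed fake values keep the
    sketch runnable."""
    return {"vout1": 0.05, "vout2": 0.0005}[node]

def psrr_sweep(freqs_hz):
    """Steps S91-S97: for each frequency, inject the AC signal, let the filter
    combine it with the DC input, pass the result through the first regulator
    into the second, measure the AC component of both outputs, and compute
    PSRR = 20*log(VAC2/VAC1)."""
    results = {}
    for f in freqs_hz:
        vac1 = measure_vac("vout1", f)                 # S93
        vac2 = measure_vac("vout2", f)                 # S95
        results[f] = 20.0 * math.log10(vac2 / vac1)    # S96
    return results                                     # S97: repeat per frequency

for f, db in psrr_sweep([100.0, 1e3, 10e3, 100e3]).items():
    print(f"{f:>9.0f} Hz: PSRR = {db:.1f} dB")
```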
In the drawings, reference numbers may be reused to identify similar and/or identical elements. DETAILED DESCRIPTION While the foregoing disclosure describes an example of systems and methods for diagnosing faults in a power inverter module for an electric vehicle (EV), the systems and methods can be used to diagnose faults in power inverter modules in other types of vehicles or non-vehicular applications. Referring now to FIG. 1, a controller 10 for a power inverter module 11 is shown. The power inverter module 11 includes a power inverter 12 including components such as power switches (PS) 14 and diodes 15. In some examples, the power switches 14 include insulated gate bipolar transistors (IGBTs), although other types of power switches can be used. The power inverter module 11 optionally includes phase current sensors 16 to sense current in each phase leg, and temperature sensors 19 to sense a temperature of each phase leg. In some examples, the temperature sensors 19 include thermistors. In some examples, the thermistors include negative temperature coefficient (NTC) thermistors, although other types of thermistors or temperature sensors can be used. When a fault occurs in the power inverter module, manufacturers typically replace the entire power inverter module rather than diagnose faults in individual components. In many situations, the fault may have occurred in a temperature sensor such as a thermistor. Repairs can be made by replacing the faulty thermistor, which is far less costly than replacing the entire power inverter module. Referring now to FIGS. 2A and 2B, while specific examples of power inverters 12 are shown, other types of power inverters can be used. In FIG. 2A, the power inverter 12 includes a power switch integrated circuit (IC) 20 connected by solder 22 and/or bond wires 24 to a metal layer 32 such as copper or another suitable metal material. A diode IC 26 is connected by solder and/or bond wires 30 to the metal layer 32. A ceramic layer 34 is arranged between the metal layer 32 and another metal layer 42. The metal layer 42 is connected by solder 46 to a copper baseplate 52. A thermal conducting layer or material 54 such as thermal grease is arranged between the copper baseplate 52 and a heatsink 56. During operation, the junction temperatures of the power switch IC 20 and the diode IC 26 are estimated and monitored. While a specific layout is shown for purposes of illustration, the power inverter 12 can have other configurations. In FIG. 2B, an example of one phase leg of the power inverter 12 includes a power switch PS1 including a gate, a first terminal, and a second terminal. The first terminal of the power switch PS1 is connected to a positive terminal of the battery. The second terminal of the power switch PS1 is connected to a first terminal of the power switch PS2. A second terminal of the power switch PS2 is connected to a negative terminal of the battery. The controller 10 controls switching of the power switches PS1 and PS2 by sending signals to the gates of the power switches PS1 and PS2. Diode D1 is connected anti-parallel to the first and second terminals of the power switch PS1. Diode D2 is connected anti-parallel to the first and second terminals of the power switch PS2. An output of the phase leg is connected to a phase stator winding. A temperature sensor such as a thermistor T1 is arranged to sense a temperature of the phase leg (for example, in proximity to the diode and the power switch). Referring now to FIG. 3, the controller 10 includes a fault detection module 110.
The fault detection module 110 includes a power loss estimating module 124, a junction temperature estimating module 120, a health indicator generating module 134 and a fault isolation module 140. The power loss estimating module 124 estimates the power loss in the power inverter based on vehicle operating conditions. In some examples, the controller 10 is connected to a controller area network (CAN) bus or other interface to receive vehicle operating condition data from other vehicle controllers such as a motor controller, a battery monitoring system, etc. The junction temperature estimating module 120 estimates junction temperatures of the power switches and diodes based on vehicle operating parameters such as coolant temperature, power loss, thermal impedance, or other values. The health indicator generating module 134 compares the sensed temperatures from the power inverter module 11 with the estimated junction temperatures of the power switches and diodes, and determines whether there is an inverter fault, a thermistor fault or no fault based on the comparison. Referring now to FIGS. 4A and 4B, examples of the power loss estimating modules 124 are shown. In FIG. 4A, an example of the power loss estimating module 124 determines the conduction loss and the switching loss for the power switches (PG_cond and PDG_sw) and the diodes (PD_cond and PD_sw) based on vehicle operating conditions. In some examples, the vehicle operating conditions are selected from a group consisting of torque, speed, battery voltage VDC and/or other vehicle operating conditions. In some examples, the power loss estimating module 124 is implemented as an operational lookup table (LUT) that is indexed by the one or more vehicle operating conditions and outputs the conduction loss and the switching loss for the power switches (PG_cond and PDG_sw) and the diodes (PD_cond and PD_sw). In FIG. 4B, another example of the power loss estimating module 124 is shown in further detail. The power loss estimating module 124 includes a calculating module 210 including a power factor and modulation index calculator 214. The power factor and modulation index calculator 214 calculates a power factor ϕ and modulation index (MI) in response to phase voltage v*abc and a filtered switch current Is_filt. A voltage drop and energy loss interpolating module 218 calculates the voltage drop Vdrop and the power losses Esw and Err. The calculating module 210 outputs the voltage drop Vdrop and on resistance RON, the power loss due to the switch Esw and diode Err, the power factor ϕ and the modulation index (MI) to a conduction loss and switching loss calculator 230. The conduction loss and switching loss calculator 230 calculates the conduction loss and the switching loss for the power switches (PG_cond and PDG_sw) and the diodes (PD_cond and PD_sw) based on the voltage drop Vdrop and the on resistance RON, the power loss due to the switch Esw and diode Err, the power factor ϕ and the modulation index (MI).
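Because the disclosure describes the FIG. 4A estimator as a lookup table indexed by operating conditions, a minimal sketch of that idea follows. The grid points, loss values, and nearest-neighbor selection are all invented for illustration; a production table would be calibrated per device and would likely interpolate rather than snap to the nearest entry.

```python
# Illustrative lookup table keyed by (torque [Nm], speed [rpm]); each entry is
# (switch conduction, switch switching, diode conduction, diode switching) in watts.
LOSS_LUT = {
    (50, 1000):  (12.0,  8.0,  5.0,  3.0),
    (50, 3000):  (13.5, 20.0,  5.5,  7.5),
    (150, 1000): (40.0, 22.0, 16.0,  9.0),
    (150, 3000): (43.0, 55.0, 17.5, 21.0),
}

def estimate_losses(torque: float, speed: float):
    """Return (P_ps, P_diode): total switch and diode losses taken from the
    grid point nearest the requested operating condition."""
    key = min(LOSS_LUT, key=lambda k: (k[0] - torque) ** 2 + (k[1] - speed) ** 2)
    sw_cond, sw_sw, d_cond, d_sw = LOSS_LUT[key]
    return sw_cond + sw_sw, d_cond + d_sw

print(estimate_losses(140.0, 2800.0))  # -> (98.0, 38.5) from the (150, 3000) entry
```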
Referring now to FIGS. 5A and 5B, examples of the junction temperature estimating modules 120 are shown. In FIG. 5A, the junction temperature estimating module 120 generates an estimated power switch junction temperature (TJ_PS) and/or an estimated diode junction temperature (TJ_D) in response to the coolant temperature Tcool, thermal impedance, power loss in the diode (Pdiode) and power loss in the power switch (Pps). In FIG. 5B, another example of the junction temperature estimating module 120 is shown and includes adders 320, 322, 350 and 354, multipliers 324, 326, 332 and 336, and low pass filters (LPFs) 340, 342, 344, and 346. PG_cond and PDG_sw are input to the adder 320. An output of the adder 320 is input to the multipliers 324 and 332, which multiply the power sum (Pps) by a first constant Rgg and a second constant Rgd, respectively. PD_cond and PD_sw are input to the adder 322. An output of the adder 322 is input to the multipliers 326 and 336, which multiply the power sum (Pdiode) by a first constant Rag and a second constant Rad, respectively. Outputs of the multipliers 324 and 332 are input to LPFs 340 and 344, respectively. Outputs of the multipliers 326 and 336 are input to LPFs 342 and 346, respectively. Outputs of the LPFs 340 and 342 and the coolant temperature are input to the adder 350. The adder 350 outputs the estimated power switch junction temperature (TJ_PS). Outputs of the LPFs 344 and 346 and the coolant temperature are input to the adder 354. The adder 354 outputs the estimated diode junction temperature (TJ_D). Referring now to FIGS. 6A and 6B, examples of health indicator generating modules 408 and 458 are shown. In FIG. 6A, the health indicator generating module 408 includes a multiplier 430 that multiplies a sensed temperature TT by a constant λ. The health indicator generating module 408 includes a difference module 440 that generates a difference between the estimated power switch junction temperature TJ_PS and λ*TT. An absolute value module 442 generates an absolute value of the difference. In some examples, these calculations are made for each phase. In FIG. 6B, the health indicator generating module 458 includes a multiplier 480 that multiplies the sensed temperature TT by the constant λ. The health indicator generating module 458 includes a difference module 490 that generates a difference between the estimated diode junction temperature TJ_D and λ*TT. An absolute value module 442 generates an absolute value of the difference. In some examples, these calculations are made for each phase leg.
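Combining the FIG. 5B thermal model with the FIG. 6A indicator, the sketch below estimates a junction temperature as the coolant temperature plus low-pass-filtered loss-times-resistance terms, then forms the absolute difference against a scaled thermistor reading. The filter constant, the two thermal resistances, and λ are assumed values standing in for the constants and LPF blocks of the figures.

```python
class LowPass:
    """First-order IIR low-pass, a discrete stand-in for the LPF blocks of FIG. 5B."""
    def __init__(self, alpha: float):
        self.alpha, self.state = alpha, 0.0
    def step(self, x: float) -> float:
        self.state += self.alpha * (x - self.state)
        return self.state

R_SELF, R_COUPLE = 0.08, 0.03        # assumed thermal resistances (K/W)
lpf_self, lpf_couple = LowPass(0.1), LowPass(0.1)

def estimate_tj_ps(t_cool: float, p_ps: float, p_diode: float) -> float:
    """Estimated power switch junction temperature: coolant temperature plus
    the filtered self-heating term and the filtered diode-coupling term,
    mirroring the two LPF paths summed by adder 350."""
    return t_cool + lpf_self.step(p_ps * R_SELF) + lpf_couple.step(p_diode * R_COUPLE)

def health_indicator(tj_est: float, t_sensed: float, lam: float = 1.0) -> float:
    """FIG. 6A style indicator: |TJ_estimate - lambda * T_thermistor|."""
    return abs(tj_est - lam * t_sensed)

for _ in range(100):   # let the filters settle at a fixed operating point
    tj = estimate_tj_ps(t_cool=65.0, p_ps=98.0, p_diode=38.5)
print(f"TJ_PS ~ {tj:.1f} C, indicator = {health_indicator(tj, 71.5):.1f} K")
```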
Referring now to FIGS. 7 and 8, methods 500 and 600 for diagnosing faults in a power inverter module are shown. In FIG. 7, a method 500 reads vehicle speed and torque at 510. At 518, the method determines whether the vehicle is in a stall condition. If 518 is false, the method calculates the health of the phase current sensors and the coolant sensors at 520. At 524, the method determines whether all of the current and coolant sensors are healthy based on the calculated health. If 524 is true, the temperature sensors for each of the phase legs are read at 528. At 532, the motor currents, voltage commands and switching strategy/frequency (and/or other vehicle operating parameters) are read. At 536, the inverter loss is calculated. At 538, the inverter junction temperature is estimated. At 542, health indicators are calculated. At 544, the method determines whether there is a fault. If 544 is true, the method generates an alert at 546. In FIG. 8, the method 600 allows differentiation between inverter faults (such as power switch and diode faults) and thermistor faults. At 610, the method calculates the absolute value of the health indicator for the diode and the power switch for each phase leg. At 614, the method compares the absolute values to the corresponding thresholds. In some examples, Diff(T_t1, T_j,PS) is compared with Th1, where Diff( ) is a function corresponding to the difference modules in FIGS. 6A and 6B. Likewise, Diff(T_t2, T_j,PS) is compared to Th2, Diff(T_t3, T_j,PS) is compared to Th3, Diff(T_ntc1, T_j,diode) is compared to Th4, Diff(T_ntc2, T_j,diode) is compared to Th5, and Diff(T_ntc3, T_j,diode) is compared to Th6. At 618, the method determines whether the health indicator values are greater than all of the corresponding thresholds. If 618 is true, the method identifies a fault with the power inverter (e.g., the power switches, the diodes or other components) at 622. If 618 is false, the method continues at 626 and determines whether Diff(T_t1, T_j,PS)>Th1 and Diff(T_ntc1, T_j,diode)>Th4 and the other differences are less than their corresponding thresholds. If 626 is true, the method declares a fault with the first thermistor at 630. If 626 is false, the method continues at 640 and determines whether Diff(T_t2, T_j,PS)>Th2 and Diff(T_ntc2, T_j,diode)>Th5 and the other differences are less than their corresponding thresholds. If 640 is true, the method declares a fault with the second thermistor at 644. If 640 is false, the method continues at 660 and determines whether Diff(T_t3, T_j,PS)>Th3 and Diff(T_ntc3, T_j,diode)>Th6 and the other differences are less than their corresponding thresholds. If 660 is true, the method declares a fault with the third thermistor at 664. The method continues from 622, 630, 644, 660 (if false), and 664 at 610. As can be appreciated, the diagnostic system can differentiate between temperature sensor faults and power inverter faults at least in part based on the estimated power switch temperature standing alone, the estimated diode temperature standing alone, or both values (as described above). In the case where the diagnostic system differentiates between temperature sensor faults and power inverter faults in part based on the estimated power switch temperature standing alone, a power inverter fault occurs when all of the health indicators (3 in this case rather than 6) exceed the corresponding thresholds, and temperature sensor faults occur when individual ones of the health indicators exceed the corresponding thresholds. A similar approach is used when diagnosis is based on the estimated diode temperature standing alone.
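A hedged sketch of the FIG. 8 decision logic follows: all indicators above their thresholds points to the inverter itself, while exactly one phase's switch and diode indicators above threshold (with the rest below) isolates that phase's thermistor. The three-phase structure mirrors the description, but the numeric indicator values and thresholds are invented.

```python
def isolate_fault(ps_ind, d_ind, ps_th, d_th):
    """ps_ind/d_ind: per-phase |difference| health indicators against the
    power switch and diode estimates; ps_th/d_th: corresponding thresholds.
    Returns a diagnosis following the FIG. 8 decision order."""
    ps_over = [i > t for i, t in zip(ps_ind, ps_th)]
    d_over = [i > t for i, t in zip(d_ind, d_th)]
    if all(ps_over) and all(d_over):
        return "power inverter fault"                  # step 622
    for phase in range(3):                             # steps 626 / 640 / 660
        others_ok = all(not ps_over[p] and not d_over[p]
                        for p in range(3) if p != phase)
        if ps_over[phase] and d_over[phase] and others_ok:
            return f"thermistor {phase + 1} fault"     # steps 630 / 644 / 664
    return "no fault isolated"

# Phase 2's thermistor disagrees with both estimates; the other phases look healthy.
print(isolate_fault(ps_ind=[1.0, 9.0, 1.2], d_ind=[0.8, 7.5, 1.1],
                    ps_th=[5.0, 5.0, 5.0], d_th=[5.0, 5.0, 5.0]))
```

Note that a mismatch on a phase's thermistor trips both of that phase's indicators, since the same sensed temperature is compared against both the switch and the diode estimates.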
The foregoing description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. The broad teachings of the disclosure can be implemented in a variety of forms. Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure. Further, although each of the embodiments is described above as having certain features, any one or more of those features described with respect to any embodiment of the disclosure can be implemented in and/or combined with features of any of the other embodiments, even if that combination is not explicitly described. In other words, the described embodiments are not mutually exclusive, and permutations of one or more embodiments with one another remain within the scope of this disclosure. Spatial and functional relationships between elements (for example, between modules, circuit elements, semiconductor layers, etc.) are described using various terms, including "connected," "engaged," "coupled," "adjacent," "next to," "on top of," "above," "below," and "disposed." Unless explicitly described as being "direct," when a relationship between first and second elements is described in the above disclosure, that relationship can be a direct relationship where no other intervening elements are present between the first and second elements, but can also be an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean "at least one of A, at least one of B, and at least one of C." In the figures, the direction of an arrow, as indicated by the arrowhead, generally demonstrates the flow of information (such as data or instructions) that is of interest to the illustration. For example, when element A and element B exchange a variety of information but information transmitted from element A to element B is relevant to the illustration, the arrow may point from element A to element B. This unidirectional arrow does not imply that no other information is transmitted from element B to element A. Further, for information sent from element A to element B, element B may send requests for, or receipt acknowledgements of, the information to element A. In this application, including the definitions below, the term "module" or the term "controller" may be replaced with the term "circuit." The term "module" may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The module may include one or more interface circuits. In some examples, the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof. The functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing. In a further example, a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module. The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. The term shared processor circuit encompasses a single processor circuit that executes some or all code from multiple modules. The term group processor circuit encompasses a processor circuit that, in combination with additional processor circuits, executes some or all code from one or more modules.
References to multiple processor circuits encompass multiple processor circuits on discrete dies, multiple processor circuits on a single die, multiple cores of a single processor circuit, multiple threads of a single processor circuit, or a combination of the above. The term shared memory circuit encompasses a single memory circuit that stores some or all code from multiple modules. The term group memory circuit encompasses a memory circuit that, in combination with additional memories, stores some or all code from one or more modules. The term memory circuit is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc). The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. The computer programs include processor-executable instructions that are stored on at least one non-transitory, tangible computer-readable medium. The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc. The computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language), XML (extensible markup language), or JSON (JavaScript Object Notation), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective-C, Swift, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5 (Hypertext Markup Language 5th revision), Ada, ASP (Active Server Pages), PHP (PHP: Hypertext Preprocessor), Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, MATLAB, SIMULINK, and Python®.
DETAILED DESCRIPTION Set forth below is a description of what is currently believed to be the preferred embodiments or best representative examples of the inventions claimed. Future and present representative examples of, or modifications to, the embodiments and preferred embodiments are contemplated. Any alterations or modifications which make insubstantial changes in function, purpose, structure or result are intended to be covered by the claims of this patent. The present inventions may be used on and/or as part of electric grills with a digital power supply as discussed in the co-pending patent application entitled "Digital Power Supply" filed by Applicants and having application Ser. No. 15/200,759, and also the co-pending patent application entitled "Digital Power Supply with Wireless Monitoring and Control," filed on the same day as this application, both of which are assigned to Weber-Stephen Products LLC, and which are both incorporated herein by reference in their entirety. The use of electric heating elements 103, 104 in harsh or outdoor environments creates a need for protection circuitry 100 that protects against dangerous current scenarios resulting from the potential failure or misuse of components in an electric grill 510. The environmental conditions (including sun, rain, wind, cleaning agents, food stuffs, and the like) may degrade electrical components and lead to short circuits, leaking current, or other dangerous conditions. In some instances, components may be permanently degraded. In other instances, degraded components, such as heating elements 103, 104, may return to normal condition if they are cleaned or re-installed. In both instances, there is a need to restrict the flow of current to protect the user. Protection circuitry 100 may protect against various failure scenarios, including, without limitation, instances of ground fault; overcurrent; driver failure; and failure of the microprocessor 113. For example, a ground fault (or unbalanced current) occurs when the current drawn by a device such as electric grill 510 does not match the current returned by the device to the wall outlet. Oftentimes, this indicates a current leakage. Leaking current creates a hazard to a user, especially if the current reaches the electric grill's housing 506. In that case, the user may be shocked. In another failure scenario, degraded components may cause the electric grill 510 to draw an unsafe current load, leading to a so-called "overcurrent." That may result in component damage and eventually lead to leaking current. In yet another failure scenario, a heating element 103, 104 may receive a current load that is not necessarily unsafe, but is inconsistent with the heating element's operating mode. This inconsistency suggests a driver failure, which in turn may lead to unsafe conditions. A further failure scenario involves the failure of the microprocessor 113. Because the microprocessor 113 controls the current delivered to the heating element(s), its failure could potentially lead to unpredictable current loads. Aspects of the present invention are designed to disable current in the event one or more failure scenarios (including those identified above) are recognized. FIGS. 1-10 show preferred embodiments of an electric grill 510 and a preferred protection circuitry 100.
By way of example, FIGS. 1A and 1B show a representative electric grill and some of its major components. FIG. 1A shows a preferred exterior of electric grill 510, including a housing and lid 506, onto which left and right control knobs 501 and 502, as well as display 503, may be mounted. The electric grill 510 includes a power cord 507 for connecting to an AC wall outlet. Left and right control knobs 501 and 502, and display 503, connect to a microcontroller 113 which is described in greater detail herein. A reset button 511 may also be provided for use as hereinafter described. As shown in FIG. 1B, left and right control knobs 501 and 502 may be associated with a first and second heating element, 103 and 104, respectively, thus creating dual cooking zones. A representative grate or cooking surface 512 is also shown in FIG. 1B. Each heating element 103, 104 may be controlled independently by a knob 501, 502 or other controller or user input associated with the heating element 103, 104. Left knob 501 and right knob 502 may be positioned on the exterior of a grill housing 506. The knobs 501 and 502, or any other input device that will be understood by those of skill in the art, may be connected to a microprocessor 113 to set the operating mode of one or more heating elements 103, 104. Although FIGS. 1A and 1B show two knobs 501, 502 controlling two heating elements 103, 104, it should be understood that protection circuitry 100 may be used with any combination of user input devices and heating elements, as will be understood by those of skill in the art. Using knobs 501 and 502, or any other input device, a user typically selects an operating mode for one or both heating elements 103 and 104. The operating mode may include a desired temperature setting. Microprocessor 113, described in further detail herein, controls the electric current delivered to heating elements 103 and 104 in order to achieve the desired temperature setting. Microprocessor 113 can achieve a desired temperature for each heating element 103 and 104 using a feedback loop in which it receives a current or real-time temperature reading from thermocouples 121 and 122, which may be proximally positioned by respective heating elements 103 and 104. It should be understood that, although thermocouples are shown as an example, any known temperature sensing device may be used. A person of ordinary skill in the art would recognize that various types and numbers of knobs, touch-pads, heating elements, temperature sensors and/or displays may be used. The electric grill 510 preferably includes a display 503 and/or other user interface. The display 503 may be connected to microprocessor 113 and display information relating to the current settings or operation of one or more of the heating elements 103, 104. For example, the display 503 may show the current temperature of heating elements 103 and 104 (as measured by thermocouples 121 and 122), as well as the desired temperature a user has selected via knobs 501 and/or 502. A preferred embodiment of protection circuitry 100 is shown in FIGS. 2 and 7, where perforated lines represent control/data lines while solid lines represent power lines. In general, non-limiting terms, FIG. 2 shows hardware components and a specially configured microprocessor that can detect various failure conditions and respond by disabling the flow of current to the electric grill 510. Protection circuitry 100 includes a current transformer 105 for measuring a difference, if any, between the current drawn by the device and the current returned from the device.
A ground fault detection unit 117 is provided to evaluate the difference, if any, and activate a trip controller 118, which would cause a latch relay 106 and/or 107 to create an open circuit and thus stop the flow of current. Moreover, a microprocessor 113 receives current readings from a Hall Effect sensor 119 and may use those current readings to detect various types of dangerous conditions. If a dangerous condition is detected, microprocessor 113 may activate the trip controller 118 to create an open circuit, or disable triac drivers 111 and/or 112 in order to prevent current from flowing to heating elements 103 and/or 104. A watchdog monitor may optionally be provided to communicate with microprocessor 113 and to disable triacs 108 and/or 109 in the event microprocessor 113 is not communicating normally. Line 101 and neutral 102 may draw alternating current (AC) from a typical wall outlet. A traditional power cord 507 may be used to plug line 101 and neutral 102 into an AC wall outlet using typical fixtures. Line 101 and neutral 102 also connect to a set of one or more AC/DC power converters 114 which supply the basic power needs of various components including display(s) and/or microprocessor(s). The power converters 114 convert the alternating current to direct current having lines of 3.3 Volts DC, 5 Volts DC, and 15 Volts DC. These DC lines may be used to power various components on the electric grill, such as one or more displays, microprocessor(s), etc. A person of ordinary skill would recognize that the AC/DC power converters 114 can be used to supply any level of DC voltage required by any of the electric grill's components. Line 101 and neutral 102 further connect to current transformer 105, which measures the difference, if any, between current going to heating elements 103 and/or 104 from line 101, and current returning to neutral 102. Any difference in current is signaled to ground fault detection unit 117, which evaluates the difference in current to determine if current is leaking. In other words, if damage to the circuit (whether temporary or permanent) has caused electric current to leak from any of the components, then the current returning through neutral 102 will be less than the current drawn in line 101. Ground fault detection unit 117 detects that there is electric current missing. Missing current is indicative of a dangerous operating condition because the leaking current may come in contact with the user, causing an electric shock, or cause other components to fail. In such a scenario, a desired response is to stop the flow of any current in order to avoid the risk of shock, electrocution, or component damage. To cause current to stop flowing, ground fault detection unit 117 activates a trip controller 118, which in turn opens electro-mechanical latches 106 and 107. As shown in FIG. 2, latches 106 and 107 are positioned in series with heating elements 103 and 104; thus, tripping a latch causes an open circuit, which, by definition, stops the flow of current. Latch relays 106 and 107 may be electro-mechanical switches for creating an open circuit and may be connected via a control line to trip controller 118. When tripped, latch relays 106 and 107 may remain open until a user engages a mechanical switch. As one example, a reset button 511 or other mechanical switch on the housing 506 may be associated with the latch relays 106 and 107 to reset them into a closed position after they have been tripped. An exemplary embodiment of ground fault detection unit 117 interacting with latch relays 106 and 107 is best shown in FIG. 4.
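A minimal sketch of the ground-fault decision just described: trip the latch relays whenever the drawn and returned currents differ by more than an allowed imbalance. The 5 mA limit below is an assumption for illustration (typical of household ground fault interrupter practice), not a value from the disclosure.

```python
def ground_fault_tripped(i_line_a: float, i_neutral_a: float,
                         imbalance_limit_a: float = 0.005) -> bool:
    """Return True when leakage is suspected: the current returning on
    neutral 102 falls short of the current drawn on line 101 by more than
    the allowed imbalance, so trip controller 118 should open latch
    relays 106 and 107."""
    return abs(i_line_a - i_neutral_a) > imbalance_limit_a

print(ground_fault_tripped(10.000, 9.998))  # False: 2 mA imbalance tolerated
print(ground_fault_tripped(10.000, 9.990))  # True: 10 mA missing, so trip
```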
As a non-limiting example, ground fault detection unit 117 may be a ground fault interrupter such as part number FAN4146ESX, made by Fairchild Semiconductor. The current transformer 105 is positioned to measure the current difference, which is read by ground fault detection unit 117. Ground fault detection unit 117 generates a trip control signal 401 if the current difference exceeds a safety threshold, in which case trip control signal 401 is fed back to latch relays 106 and 107, creating an open circuit and stopping the flow of current. A user turning on a device in which current is leaking will be protected because the tripping of latch relays 106 and 107 will cause an open circuit, thereby minimizing the risk of electric shock to the user or further damage to the equipment. A person of skill in the art would recognize that a certain tolerance in current difference may be allowable. Again, by reference to FIG. 2, a step-down transformer 115 is provided because ground fault detection unit 117 operates at a lower voltage than that drawn from line 101 and neutral 102. Line 101 and neutral 102 are connected to step-down transformer 115, which provides a lower secondary voltage through a full wave rectifier 116 to ground fault detection unit 117 and also to a trip controller 118. The step-down transformer 115 has the benefit of isolating the ground fault detection unit 117 and trip controller 118 from the high voltage of line 101 and neutral 102. Instead, they operate at the lower secondary voltage. A person of skill in the art would recognize that step-down transformers are used to isolate components operating at a lower voltage. Step-down transformer 115 has the additional benefit of separating ground fault detection unit 117 from microprocessor 113, which provides added protection in the event that microprocessor 113 fails during a ground fault/unbalanced current. Microprocessor 113's failure would not prevent ground fault detection unit 117 from recognizing a ground fault/unbalanced current. Likewise, a failure of ground fault detection unit 117 would not prevent microprocessor 113 from continuing to monitor current conditions. During normal operation, microprocessor 113 controls the heat and temperature setting by controlling the flow of electricity to heating elements 103 and 104. Microprocessor 113 may also be configured to detect and respond to abnormal operating conditions, i.e., conditions having an increased risk of electrocution, shock or component damage. A discussion of microprocessor 113's functionality during normal operating conditions is provided, followed by specific configurations that allow microprocessor 113 to detect and respond to failure conditions. During normal operating conditions, microprocessor 113 controls the electricity (and thus, the heat and temperature) delivered to heating elements 103 and 104 from line 101 and neutral 102. The electric path runs through line 101 and neutral 102, which are connected through current transformer 105, and further through a series of latch relays 106 and 107 and triacs 108 and 109. As will be understood, triacs are three-electrode devices, or triodes, that conduct alternating current. Triacs are a type of solid state bidirectional switch. The protection circuit 100 disclosed herein describes the use of triacs to control current flowing to heating elements 103 and 104; however, it will be understood that other solid state bidirectional switches may be used in place of a triac consistent with the present inventions. Heating elements 103 and 104 may be resistive heaters which increase in temperature as more current passes through them.
Other types of heating elements 103, 104 may also be used as will be understood by those of skill in the art. Triac drivers 111 and 112 control triacs 108 and 109 by "opening" and "closing" them to allow or prevent current from passing to heating elements 103 and 104. A person of ordinary skill in the art would recognize that triac drivers are used to control a high voltage triac with a low voltage DC source (such as a microprocessor) (FIG. 2). Moreover, triac drivers 111, 112 are used to isolate devices from a potentially high current or voltage in a triac. Triac drivers 111 and 112 interface between microprocessor 113 and triacs 108 and 109 while at the same time keeping microprocessor 113 isolated from voltages and currents in triacs 108 and 109. In order to achieve a user's desired temperature during normal operation, microprocessor 113 controls current delivered to the heating elements 103 and 104 by activating (or deactivating) triacs 108 and 109 via their triac drivers 111, 112. In other words, microprocessor 113 controls the current drawn, and thus the temperature, of heating elements 103 and 104 by controlling the triac drivers 111 and 112. A disabled triac 108 and/or 109 creates an open circuit through which no current can flow. To recognize when a desired temperature has been achieved, microprocessor 113 may receive temperature feedback from one or more thermocouples 121 and 122 located proximately to each heating element 103 and 104, or elsewhere throughout the cook box. FIG. 1B shows a representative example of thermocouples 121 and 122 adjacent to each heating element 103 and 104. The feedback is used by microprocessor 113 to adjust the current delivered to the heating elements 103, 104 until the desired temperatures selected by knobs 501 and/or 502 are achieved. As a result, a user can select a desired operating mode (independently) for heating elements 103 and 104, and microprocessor 113 will control the current delivered until a desired temperature setting is reached. FIG. 5 shows exemplary inputs and outputs to and from microprocessor 113, which can use the feedback from the thermocouple 121 and/or 122 to adjust current flowing to a heating element 103 and/or 104 until a desired temperature is reached. The desired temperature may be selected by a user through a user interface, such as knobs 501 or 502, and communicated electronically to microprocessor 113. A person of ordinary skill in the art would understand that the microprocessor 113 may include and communicate with an internal or external memory 508 containing the software instructions for executing the calculations and comparisons, as well as other settings described herein. As an optional input example, microprocessor 113 may receive a control signal from a zero crossing detection unit 110 (FIG. 2). The zero crossing detection unit 110 sends a control signal each time the alternating current, as measured through step-down transformer 115, crosses zero. Using this signal, microprocessor 113 can identify the present status of an alternating current's waveform. Tracking the zero crossings enables microprocessor 113 to turn triacs 108 and 109 on and off in a manner that reduces the harmonics introduced. Microprocessor 113 may be configured to identify dangerous conditions that arise during normal operation. Although ground fault detection unit 117 detects a leaking current, there are other dangerous conditions that microprocessor 113 is specifically configured to detect and respond to.
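The thermocouple feedback loop described above can be sketched as a simple bang-bang controller with hysteresis. This is a minimal sketch under assumed values: the disclosure does not specify the control law, setpoint handling, or band width, so those choices are invented here.

```python
def triac_command(t_desired_c: float, t_measured_c: float,
                  currently_on: bool, hysteresis_c: float = 2.0) -> bool:
    """Bang-bang sketch of the feedback loop: enable the triac (deliver
    current to the heating element) below the band around the setpoint,
    disable it above the band, and hold the last command inside the band."""
    if t_measured_c < t_desired_c - hysteresis_c:
        return True
    if t_measured_c > t_desired_c + hysteresis_c:
        return False
    return currently_on  # inside the band: keep the previous state

state = False
state = triac_command(230.0, 180.0, state)  # True: still heating toward setpoint
state = triac_command(230.0, 235.0, state)  # False: overshoot, triac disabled
print(state)
```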
As seen in FIG. 2, microprocessor 113 is in communication with trip controller 118 and triac drivers 111 and 112, thus giving microprocessor 113 two different ways to stop a flow of current if it detects a failure condition: by tripping a latch 106 or 107, or by disabling triacs 108 and/or 109. For example, FIG. 3 shows that heating elements 103 and 104 are in series with triacs 108, 109 and with latches 106, 107. As a practical matter, opening one of the latches 106, 107 or both of the triacs 108, 109 will stop the flow of all current. As one example, microprocessor 113 may be configured to respond to an "overcurrent" scenario. Overcurrent conditions are dangerous because they are associated with an increased risk of component failure and/or damage to electronic circuitry, which in turn may be a precursor to current leakage. An overcurrent scenario occurs when a circuit draws more current than it is safely rated to handle. An overcurrent may occur if a harsh environment causes the resistance value of some components, such as heating elements, to change, resulting in a higher current draw. However, an overcurrent scenario does not necessarily correlate to a mismatch in current. Therefore, ground fault detection unit 117 may not detect an overcurrent, and it may be desirable to configure microprocessor 113 to recognize it. To that end, a Hall Effect sensor 119 sends microprocessor 113 a current reading indicative of the current flowing through triacs 108 and 109. A Hall Effect sensor 119 measures the current being delivered through one or more of the triacs and to heating elements 103 and 104. The protection circuitry described herein discloses a Hall Effect sensor 119 that is used to measure current, but a person of skill in the art would recognize that any suitable current sensor may be used in place of Hall Effect sensor 119. The Hall Effect sensor 119 is connected to microprocessor 113 via a control line to convey to microprocessor 113 how much current is being delivered through the heaters 103, 104. The Hall Effect sensor 119 measures the current delivered to heating elements 103 and 104 and sends a current measurement to microprocessor 113 via a control/data line. The Hall Effect sensor 119 may be configured to measure the current through the voltage line 101, or to measure both of the two currents going to the individual heating elements 103 and 104. In either configuration, the current reading is communicated to the microprocessor 113. FIGS. 2 and 5 show a connection between microprocessor 113 and Hall Effect sensor 119. FIG. 6 shows microprocessor 113 sending a trip control signal if it detects an overcurrent condition. In FIG. 2, Hall Effect sensor 119 is shown to measure the combined current in the power line leading to triacs 108 and 109. A person of ordinary skill in the art would recognize that a possible alternative configuration would be to connect one Hall Effect sensor to the node of each triac, thereby measuring the current to each individual triac rather than the combined current. To recognize an overcurrent condition, microprocessor 113 compares the current reading from Hall Effect sensor 119 with a predetermined threshold current level at which the circuit may safely operate. The predetermined threshold is the threshold for an overcurrent condition. The predetermined threshold current level may be chosen based on any number of considerations, including the maximum current at which the heating element 103, 104 may operate, or the maximum current at which any of the other components in the circuit may operate.
Microprocessor 113 compares the current measured by Hall Effect sensor 119 to the predetermined threshold current level. If the current exceeds the threshold, there exists a potential overcurrent condition and the flow of current should be stopped. To stop the flow of current, microprocessor 113 sends a trip control signal 505 to trip controller 118, which is connected via a control/data line. Trip controller 118 responds by tripping latch relays 106 and 107, causing an open circuit with respect to the heating elements and thereby stopping the flow of current. Exemplary inputs from the Hall Effect sensor 119 to microprocessor 113, and the trip control signal 505 from microprocessor 113, are shown in FIG. 5. In some embodiments, microprocessor 113 may additionally be configured to recognize when heating elements 103 and 104 draw a current that is within a safe range, but which is different from the current expected to be drawn given a heating element's selected operating mode. For example, a potentially dangerous scenario may occur when a heating element is set to a "LOW" temperature but drawing current reserved for a "HIGH" temperature, or vice versa. If a user has set a heating element 103 and/or 104 to a high temperature, but only a low current is being delivered, it is likely a component has failed. Possible causes of such a scenario include, without limitation, a harsh or caustic environment corroding Hall Effect sensor 119, or a failure of triacs 108, 109 or triac drivers 111, 112. Microprocessor 113 may use a feedback loop from thermocouples 121 and 122 to deliver current to a heating element 103 and/or 104 until a desired temperature is achieved. The desired temperature may then be maintained at a steady state. A person of ordinary skill would recognize that raising the temperature of a heating element 103 or 104 draws more current than maintaining the temperature at a steady state, since a higher current results in a heating element having a higher temperature. By way of example, if a user activates electric grill 510 and selects a "HIGH" temperature, microprocessor 113 must deliver a high current to the relevant heating element 103 and/or 104 until the "HIGH" temperature has been achieved. Once microprocessor 113 recognizes that the desired "HIGH" temperature has been achieved (for example, via feedback from thermocouples 121 and 122), microprocessor 113 can reduce the current delivered in order to maintain the temperature at a steady state. The heating elements may operate in discrete modes, such as "HIGH," "MEDIUM," and "LOW," or on a continuous spectrum measured, for example, as a percentage or a temperature. To identify an unexpected current condition, microprocessor 113 is configured to compare a current reading from Hall Effect sensor 119 with an expected current. The current which microprocessor 113 is configured to deliver to the heating elements in any given mode (accounting for whether microprocessor 113 is raising a temperature or maintaining a steady state) is the "expected current," because it is expected to match the reading from Hall Effect sensor 119 during normal operating conditions. In other words, during normal operation, the current reading from Hall Effect sensor 119 is expected to match the current microprocessor 113 is programmed to deliver.
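The expected-current comparison may likewise be sketched briefly. The table of expected currents, the tolerance, and the function names below are assumptions for illustration; the text specifies only that the expected value depends on the operating mode and on whether the temperature is being raised or maintained:

# Illustrative expected-current check (all numeric values assumed).
EXPECTED_CURRENT_A = {  # per (mode, phase), as might be stored in memory 508
    ("LOW", "raising"): 5.0,
    ("LOW", "steady"): 2.0,
    ("HIGH", "raising"): 15.0,
    ("HIGH", "steady"): 8.0,
}
TOLERANCE_A = 1.0  # assumed allowable deviation

def current_matches(mode, phase, hall_reading_a):
    """True if the Hall Effect sensor reading matches the expected draw."""
    expected = EXPECTED_CURRENT_A[(mode, phase)]
    return abs(hall_reading_a - expected) <= TOLERANCE_A

# Example: a 0 A reading in HIGH/steady mode indicates a likely failure.
assert not current_matches("HIGH", "steady", 0.0)
assert current_matches("LOW", "steady", 2.4)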
If the current reading from Hall Effect sensor 119 does not match the expected current, it is likely that a driver failure has occurred. The expected current value may be accessible to microprocessor 113 through internal or external memory 508. In this way, microprocessor 113 is programmed to recognize the total amount of current that should be drawn by a normally functioning heating element or elements in any given operating mode (or combination of operating modes). Should a failure condition arise, microprocessor 113 responds by disabling triac drivers 111 and 112, thereby opening the respective triacs and cutting current through the heating elements 103 and/or 104. In one embodiment, microprocessor 113 may optionally be programmed to re-enable the flow of current after a predetermined amount of time has passed, and to continue monitoring the current drawn. Re-enabling the flow of current may be desirable because the cause of the failure may have been temporary. By way of non-limiting example, a temporary failure condition that quickly stabilizes may be detected if the electric grill 510 was recently turned on or off, or if a temporary irregularity occurred in the power grid. FIG. 6 is a flow chart showing microprocessor 113 determining an expected current based on electric grill 510's operating mode, and comparing the expected current to an actual current reading received from the Hall Effect sensor 119. If a mismatch is detected, triac drivers 111 and 112 are disabled. Moreover, FIG. 6 also shows microprocessor 113 comparing a current reading from the Hall Effect sensor 119 to an overcurrent threshold, and responding to an overcurrent condition by sending trip control signal 505. A person of ordinary skill in the art would recognize that these steps and comparisons could be performed in any order and in a number of different implementations, all of which are contemplated by the present inventions. Microprocessor 113 may repeat these operations on any desired or periodic basis. In yet another failure example, protection circuit 100 protects against a failure of microprocessor 113 itself. Because microprocessor 113 controls the current delivered to heating elements 103 and 104, its failure could lead to unpredictable results, including unsafe levels of current. To protect against a failure of microprocessor 113, the circuit 100 may include a watchdog monitor 120 connected between microprocessor 113 and triacs 108 and 109, as shown in FIG. 2. In this configuration, microprocessor 113 sends a watchdog monitor signal 504 to watchdog monitor 120, which confirms that microprocessor 113 is operating normally. Watchdog monitor 120 is also connected to triacs 108 and 109. In the absence of a signal from microprocessor 113 confirming normal operation, watchdog monitor 120 disables the triacs 108 and 109, thus preventing current from flowing through them. If microprocessor 113 subsequently returns to normal operation, watchdog monitor 120 may re-enable the flow of current. This configuration of watchdog monitor 120 allows for the possibility that microprocessor 113 may return to normal operation after a period of malfunction or resetting, which is advantageous because operation can resume automatically after microprocessor 113 boots or reboots. In other words, if microprocessor 113 is in the process of rebooting (intentionally or unintentionally), watchdog monitor 120 may determine that microprocessor 113 is not operating normally and disable the flow of current.
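The watchdog behavior described above may be sketched as follows; the timeout value, class structure, and polling model are illustrative assumptions, representing only one of many ways such a monitor could be realized:

# Illustrative watchdog behavior (timeout assumed).
import time

class WatchdogMonitor:
    """Disables the triacs while microprocessor 113 is silent, and
    re-enables them when its signal (504) returns."""
    def __init__(self, timeout_s, set_triacs_enabled):
        self.timeout_s = timeout_s
        self.set_triacs_enabled = set_triacs_enabled
        self.last_signal = time.monotonic()

    def signal(self):
        # Called whenever watchdog monitor signal 504 arrives.
        self.last_signal = time.monotonic()

    def poll(self):
        ok = (time.monotonic() - self.last_signal) <= self.timeout_s
        self.set_triacs_enabled(ok)  # silence disables; a fresh signal re-enables
        return ok

wd = WatchdogMonitor(timeout_s=1.0, set_triacs_enabled=lambda on: None)
wd.signal()
assert wd.poll()  # signal is fresh, so the triacs stay enabled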
But normal operation may resume once microprocessor 113 completes its boot sequence and resumes sending its signal to watchdog monitor 120. FIG. 7 shows additional embodiments of the inventions. For example, shown in FIG. 7 is an embodiment in which zero crossing detection unit 110 is connected directly to line 101 and neutral 102, without any intermediary transformer. Also shown is an embodiment in which ground fault detection unit 117 is connected to power (in this case, 12 V, but other voltages are also contemplated) through the AC/DC power converters 114. Zero crossing detection unit 110 and ground fault detection unit 117 may perform the functions described herein when configured as shown in FIG. 2, FIG. 7, or in any number of other configurations. FIG. 7 further discloses relays 701 and 702, which are configured in parallel with triacs 108 and 109, respectively. Relays 701 and 702 are controlled by microprocessor 113, via control lines (not shown), to control the delivery of current to heating elements 103 and 104, respectively. Because of the parallel configuration between relays 701, 702 and triacs 108, 109, current can be delivered to the heating elements 103, 104 by activating either a relay or a triac. Stated another way, microprocessor 113 can use either the respective triac 108, 109 or the respective relay 701, 702 to deliver current to heating elements 103, 104. An advantage of having two components (a relay and a triac) that can each deliver current to the heating elements 103, 104 is that microprocessor 113 can alternate between the two components to reduce heat generation. For example, delivering 100% power to heating elements 103, 104 may cause triacs 108, 109 to overheat when active. More specifically, heating elements 103, 104 may draw a relatively high amount of current when a high temperature is desired, and delivering that current through triacs 108, 109 for a prolonged period of time may cause triacs 108, 109 to overheat and eventually deteriorate. To avoid this, microprocessor 113 may deactivate triacs 108, 109 and instead activate relays 701, 702 when delivering a "HIGH," or relatively high, current to heating elements 103, 104 (a brief illustrative sketch of this alternation follows this passage). Current then flows to heating elements 103 and/or 104 through relays 701 and/or 702, respectively, thereby protecting triacs 108, 109 from overheating. FIG. 7 further shows an embodiment of microprocessor 113 having the functionality of band controller 703. A person of skill in the art, having the benefit of this disclosure, would understand that band controller 703 may be a physical and/or virtual subcomponent of microprocessor 113, or may alternatively be a separate hardware and/or software component. In embodiments of the inventions, band controller 703 may be configured to receive a target temperature via a user input (including wireless inputs), and to control the amount of power (i.e., current) delivered to heating elements 103, 104 to achieve the user-selected target temperature. Band controller 703 may use hardware and software applications to achieve and maintain target temperatures at heating elements 103, 104 by controlling the amount of current delivered. Band controller 703 may receive feedback from thermocouples 121, 122, which may be positioned proximate to heating elements 103, 104, and use such feedback to determine when a target temperature has been achieved. In embodiments of the inventions, it may be desirable to estimate the ambient temperature within the grill's cook box using thermocouples 121, 122.
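As referenced above, the alternation between relay and triac may be sketched as follows. The power threshold at which the relays take over is an assumption, since the text states only that the relays may carry relatively high, prolonged currents to spare the triacs:

# Illustrative relay/triac selection (threshold assumed).
HIGH_POWER_FRACTION = 0.8  # above this, route current through the relay

def route_power(power_fraction, set_triac_on, set_relay_on):
    """Use relay 701/702 for prolonged high-power delivery to protect
    triac 108/109 from overheating; use the triac otherwise."""
    use_relay = power_fraction >= HIGH_POWER_FRACTION
    set_relay_on(use_relay)
    set_triac_on(not use_relay)
    return "relay" if use_relay else "triac"

assert route_power(1.0, lambda on: None, lambda on: None) == "relay"
assert route_power(0.5, lambda on: None, lambda on: None) == "triac"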
There are scenarios in which the ambient temperature (e.g., the temperature at a position six or eight inches above the heating elements) may not be identical to the temperature at heating elements 103, 104, especially when operating at higher temperatures. Because food may be positioned throughout a grill's cook box, for example on a grate positioned a few inches above heating elements 103, 104, it may be desirable for band controller 703 (and microprocessor 113) to operate based on an estimated ambient temperature, rather than the temperature at heating elements 103, 104. Operating based on the ambient temperature provides a more precise measurement of a food's temperature, and therefore a more precise measurement of a food's doneness. By way of example, FIG. 10 shows Applicants' test data for accurately estimating the ambient temperature 1001 based on the temperature 1002 at thermocouples 121, 122. On its x-axis, FIG. 10 shows a temperature 1002 measured at thermocouples 121, 122. On its y-axis, FIG. 10 shows a corresponding estimated ambient temperature 1001. The curve 1003 shows the estimated ambient temperature (y-axis) as a function of the measured temperature (x-axis). The estimated ambient temperature of FIG. 10 was measured a few inches above a heating element, at a position where a user may configure a cooking grate. The data show that, at higher temperatures, the ambient temperature diverges from the measured temperature at the thermocouples; in other words, at higher temperatures, the estimated ambient temperature at a position above a heating element rises faster than the temperature of the heating element. By way of example, at reference point 1004, the estimated ambient temperature and the temperature 1002 at the thermocouples are roughly equal, at 150 F. At a higher temperature (e.g., reference point 1005), the temperature at the thermocouple may be 300 F, whereas the estimated ambient temperature has risen to approximately 400 F. Thus, at higher temperatures, a larger offset is required in order to accurately estimate the ambient temperature. Using the offsets indicated by FIG. 10, microprocessor 113 and/or band controller 703 may be adapted and configured with hardware and/or software to calculate an estimated ambient temperature based on a measured temperature at thermocouples 121, 122 (a brief illustrative sketch follows this passage). It should be understood that the offsets of FIG. 10 are provided as an example only, and may be increased or decreased depending on factors such as the height of a cooking grate and other factors which may affect ambient conditions. Moreover, microprocessor 113 and/or band controller 703 may use such an estimated ambient temperature as part of a feedback loop to determine when a target temperature is reached. In other words, in some embodiments a target temperature may refer to the estimated ambient temperature, and in other embodiments it may refer to the temperature at thermocouples 121, 122. It is contemplated that yet further embodiments may use a food probe (not shown) to measure a food's temperature and determine when a target temperature is reached based on a temperature reading from the food probe. A food probe is a temperature sensing device which may be inserted by a user into a food, such as a steak or a chicken breast, to measure the food's internal temperature. Using a food probe to sense temperature may be advantageous for some cooking styles because it can provide an accurate determination of a food's internal temperature, and by extension its doneness.
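As referenced above, the offset-based ambient estimate of FIG. 10 may be sketched with a simple interpolation. Only the two reference points (roughly 150 F on both axes, and 300 F measured versus approximately 400 F ambient) come from the discussion above; the linear interpolation/extrapolation scheme and the function name are assumptions:

# Illustrative ambient-temperature estimate from a thermocouple reading.
CAL_POINTS = [(150.0, 150.0), (300.0, 400.0)]  # (measured F, estimated ambient F)

def estimate_ambient_f(thermocouple_f):
    (x0, y0), (x1, y1) = CAL_POINTS
    if thermocouple_f <= x0:
        return thermocouple_f  # low range: offset is roughly zero
    slope = (y1 - y0) / (x1 - x0)  # offset grows with temperature
    return y0 + slope * (thermocouple_f - x0)

assert estimate_ambient_f(150.0) == 150.0
assert abs(estimate_ambient_f(300.0) - 400.0) < 1e-9

A real implementation would interpolate over the full measured curve 1003 rather than two points, and the calibration would shift with grate height and other ambient factors, as noted above.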
To consistently maintain a target temperature, band controller 703 may determine temperature "bands" surrounding a given target temperature, where said bands indicate the amount of power (i.e., current) to deliver to a heating element 103, 104 as a target temperature is approached. In embodiments of the inventions, the bands create zones representing 0%, 50%, and 100% power. The zone above upper band 801 represents a temperature zone in which 0% power is delivered; the zone between 801 and 803 represents a zone in which 50% power is delivered; and the zone below lower band 803 represents 100% power delivery. Band controller 703 uses the bands to determine an appropriate power (e.g., electric current) to deliver to a heating element to achieve and maintain the desired target temperature (a brief illustrative sketch of this zone logic follows this passage). By way of example, as seen in FIG. 8A, band controller 703 may deliver 100% power until a desired target temperature 802 is achieved, and then reduce power to 50% until an upper band 801 is reached. If the upper band 801 is exceeded, band controller 703 reduces power to 0%. If the temperature drops to (or below) a lower band 803, power is again increased to 100%. Band controller 703 continuously receives feedback from thermocouples 121, 122, and compares the feedback (in some embodiments, the estimated ambient temperature described above) to the appropriate temperature bands. In this way, the temperature fluctuates between lower band 803 and upper band 801, and approximates the target temperature. Moreover, in embodiments of the invention, band controller 703 dynamically shifts the bands depending on the desired target temperature. Dynamically shifting the temperature bands allows for more precise temperature control, allowing a user to approximately maintain the selected target temperature. This is useful because, at lower temperatures, a 50% power setting may cause the electric grill's temperature to continue increasing past the desired target temperature, while at higher temperatures, delivering 50% power may cause the temperature to begin dropping below the desired target temperature. Therefore, band controller 703 may compensate by lowering the bands for a lower desired target temperature, and by shifting the bands higher for a higher temperature range. An example of lowered temperature bands corresponding to a lower desired target temperature is shown in FIG. 8B, in which a lower target temperature has been selected and band controller 703 has shifted the upper band 801 to coincide with the target temperature. Conversely, FIG. 8C shows a relatively high target temperature, for which band controller 703 has raised the power bands such that the target temperature 802 coincides with the lower band 803. Exemplary values for the power bands are provided in the following table:

Desired target temperature (T) | Lower temperature band (100%) | Upper temperature band (0%)
T < 250 F.                     | T − 25 F.                     | T
250 F. < T < 400 F.            | T − 10 F.                     | T + 10 F.
400 F. < T                     | T                             | T + 15 F.

In embodiments having multiple heating elements capable of independent operation, users can input multiple target temperatures. For example, an embodiment having two independent heating elements 103, 104 may receive two separate target temperatures, each corresponding to one heating element. Target temperatures may be communicated to band controller 703 through any number of possible user inputs. By way of non-limiting example, possible user inputs include knobs 501, 502.
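As referenced above, the zone logic and the exemplary table may be sketched as follows. The sketch implements only the three-zone power selection; the hysteresis nuance of FIG. 8A (running at 100% until the target is first reached) and the handling of the boundary temperatures 250 F and 400 F, which the table leaves open, are simplified assumptions:

# Illustrative band controller power selection (Python sketch).
def bands_for_target(t_f):
    """(lower band, upper band) per the exemplary table above."""
    if t_f < 250.0:
        return t_f - 25.0, t_f
    if t_f < 400.0:
        return t_f - 10.0, t_f + 10.0
    return t_f, t_f + 15.0

def power_percent(temp_f, target_f):
    """0, 50, or 100 depending on which zone the temperature falls in."""
    lower, upper = bands_for_target(target_f)
    if temp_f < lower:
        return 100  # below lower band 803: full power
    if temp_f > upper:
        return 0    # above upper band 801: power off
    return 50       # between the bands: half power

assert power_percent(200.0, 300.0) == 100
assert power_percent(305.0, 300.0) == 50
assert power_percent(315.0, 300.0) == 0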
User inputs can also be received wirelessly, via wireless controller 704, from a wireless device configured to communicate with wireless controller 704. In such an embodiment, wireless controller 704 may be configured to wirelessly communicate with a remote device via Wi-Fi, Bluetooth, radio frequency, or any other form of wireless communication. Remote devices include cell phones, tablets, laptops, computers, and any other form of device capable of wireless communication. FIG. 9 shows an exemplary remote device 901, having a display 902 and user input device 903, communicating with the wireless controller 704 of electric grill 510. In a non-limiting example, remote device 901 may be a cell phone with a touch screen as its input device 903. Regardless of the type of device used, it is contemplated that remote device 901 may be configured to receive a user input representing, among other things, one or more target temperatures, and to wirelessly communicate said target temperature to electric grill 510 via wireless controller 704. In exemplary embodiments, remote device 901 may be adapted and configured to directly receive a desired target temperature from a user. In such embodiments, a user can use input device 903 to select a target temperature. In other exemplary embodiments, remote device 901 may be adapted and configured to receive a user input selecting a type of meat to be cooked and a desired doneness, and to determine the appropriate target temperature for the user's selection. In such embodiments, remote device 901 may have a memory 904 storing the appropriate target temperature associated with a desired food profile. A user thus uses input device 903 to select a food profile, and remote device 901 wirelessly communicates the associated target temperature. In addition to controlling target temperatures, embodiments of remote device 901 are adapted and configured to send an "on" and/or "off" signal wirelessly, via wireless controller 704, to microprocessor 113 and/or band controller 703. As such, a user can both control the desired target temperature of the electric grill 510 and turn it on and off. Additional examples of wireless communication between remote device 901 and electric grill 510 (via wireless controller 704) include the ability to control settings for display 503 remotely, from remote device 901. Thus, remote device 901 may be adapted and configured to wirelessly control the information displayed on display 503 of electric grill 510. Remote device 901 may control which information is displayed on display 503, and may allow a user to toggle between Celsius (C) and Fahrenheit (F) with respect to temperature measurements. Such information may include the electric grill 510's current temperature, ambient temperature, and target temperature, as well as timers indicating how long the grill has been active, how long a food has been cooking, or how much time remains until a food reaches its target temperature. Such information may further be wirelessly transmitted from electric grill 510, via wireless controller 704, to remote device 901. In turn, remote device 901 may provide such information to a user on a remote device display 902, and may further use said information to wirelessly turn electric grill 510 off, or reduce its desired target temperature, if a predetermined temperature has been reached or if a food has been cooking for a predetermined time period.
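By way of illustration only, a target-temperature command from remote device 901 might be serialized as below. The disclosure does not specify a message format; the JSON schema and field names here are purely hypothetical:

# Hypothetical remote-to-grill command message (format assumed).
import json

def make_command(target_temp_f=None, power_on=None):
    """Build a message for wireless controller 704; omitted fields are unset."""
    msg = {}
    if target_temp_f is not None:
        msg["target_temp_f"] = target_temp_f
    if power_on is not None:
        msg["power_on"] = power_on
    return json.dumps(msg)

# Example: set a 350 F target and turn the grill on.
assert make_command(350.0, True) == '{"target_temp_f": 350.0, "power_on": true}'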
In exemplary embodiments, food profiles are stored in memory 904, where such food profiles indicate the appropriate target temperature and/or an appropriate cooking time for a given food. Remote device 901 may monitor information received wirelessly from electric grill 510 and determine if an appropriate temperature or cooking time has been reached. Remote device 901 may also be adapted and configured to turn off electric grill 510 once that happens, and/or to provide an audible or visual alert. Such an audible and/or visual alert may be provided on the remote device 901, at the electric grill 510, or both. Moreover, it is contemplated that embodiments of the inventions may use wireless communications to deliver error codes from the electric grill 510 to a remote device 901, where said error codes may be indicative of an unsafe current condition as described further herein. Delivering error codes to a remote device 901 has the advantage of allowing a user to remotely understand when an unsafe current condition has occurred, and remote device 901 may further display safety tips for correcting the unsafe current condition as well as record the conditions that led to the unsafe condition. Error codes may be determined by microprocessor 113 acting in conjunction with the protection circuitry 100. As described further herein, microprocessor 113 may be in communication, via control lines, with ground fault detection unit 117 and Hall Effect sensor 119. Thus, microprocessor 113 may be adapted and configured to receive a control signal from ground fault detection unit 117 indicating that a ground fault has been detected. Likewise, microprocessor 113 may be adapted and configured to use signals from Hall Effect sensor 119 to recognize errors in delivering current to heating elements 103 and 104. As described further herein, a reading of zero current from Hall Effect sensor 119 indicates that heating elements 103 and 104 are not receiving any current, whereas an unexpectedly high current reading indicates that too much current is flowing to heating elements 103 and 104 (e.g., an "over-current" scenario). In embodiments of the inventions, microprocessor 113 is adapted and configured to recognize these errors and wirelessly communicate, via wireless controller 704, an error code corresponding to the error which occurred. For example, an error code of "01" may indicate that a ground fault has been detected; "02" may indicate that Hall Effect sensor 119 has determined that no current (or an unexpected current) is flowing to heating elements 103 and/or 104; and "03" may indicate that Hall Effect sensor 119 has detected an unexpectedly high current flowing to heating elements 103 and/or 104. In embodiments where microprocessor 113 is a chip including a "self-check" feature, an error code of "04" may be sent if the self-check pin determines a failure of microprocessor 113. A person of ordinary skill in the art would recognize that any variety of codes may be used to indicate each error. In response to an error, an audible or visual alert may be signaled at electric grill 510, including for example on display 503. Likewise, remote device 901 may also provide an audible or visual alert upon receiving an error code. Remote device 901 may be adapted and configured to wirelessly receive error codes and display, on display 902, a message identifying the type of error to the user. Such an error message may be accompanied by an audible or visual alert at remote device 901.
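The error codes described above lend themselves to a simple lookup. In the sketch below, the codes come from the passage above, while the message strings and function names are assumptions:

# Illustrative error-code reporting (message text assumed).
ERROR_MESSAGES = {
    "01": "Ground fault detected (ground fault detection unit 117).",
    "02": "No or unexpected current to the heating elements (Hall Effect sensor 119).",
    "03": "Unexpectedly high current / overcurrent (Hall Effect sensor 119).",
    "04": "Microprocessor self-check failure.",
}

def report_error(code, send_wireless, show_on_display):
    """Send the code via wireless controller 704 and alert locally."""
    message = ERROR_MESSAGES.get(code, "Unknown error code: " + code)
    send_wireless(code)       # remote device 901 decodes and displays it
    show_on_display(message)  # audible/visual alert, e.g., on display 503
    return message

assert report_error("01", lambda c: None, lambda m: None).startswith("Ground fault")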
Remote device 901 may further be adapted and configured to display a message, saved in memory 904, explaining steps that a user should take to correct the error. For example, as explained further herein, protection circuitry 100 may be configured to trip a latch relay 106 and/or 107 in response to a ground fault. Therefore, if microprocessor 113 sends an error code (e.g., "01") indicating a ground fault to remote device 901, remote device 901 may display a message alerting the user that a ground fault has occurred and prompting the user to reset latch relay 106 and/or 107. In response to an error code "02," remote device 901 may be adapted and configured to alert the user that no current is flowing to heating element 103 and/or 104. The absence of current flowing may be indicative of an open circuit, which may occur, for example, if a heating element 103, 104 is not properly installed. Thus, remote device 901 may display a message prompting the user to uninstall, and re-install, heating elements 103, 104. If the error persists, remote device 901 may prompt the user to contact the manufacturer. Similarly, if error code "03" is received, an over-current has occurred. One possible cause of an over-current may be that a user has installed an incompatible, or faulty, heating element having an incorrect resistance value. (A heating element with an incorrectly low resistance will cause an inappropriately high current to flow through it.) For example, a heating element designed to work at 120 V would have a resistance value that is too low to function at 230 V, causing an overcurrent. Thus, the user may be prompted to check the heating element, or to replace it with a new one. Remote device 901 and/or microprocessor 113 may create a log of errors and store the error log in a memory. Such an error log may include a record of each error that occurred. Moreover, in embodiments where remote device 901 receives status information (such as the temperature of the heating elements, ambient temperature, temperature targets, cooking time, etc.) from electric grill 510, such status information may also be recorded in the error log. Status information may be delivered continuously, or in response to an error. By way of example, it may be advantageous to record how long a grill had been cooking before an error occurred, the grill's temperature at the time of an error, and other related information. An error log may be helpful in diagnosing errors. It should be understood that the error log may be created and/or stored on the remote device 901, on electric grill 510 (or microprocessor 113), or on both. A person of skill in the art would understand that a wide variety of parameters may be recorded and stored as part of an error log. In some embodiments, remote device 901 may have an internet connection 905. Internet connection 905 allows remote device 901 to optionally send a recorded error log to a third party, such as an electric grill's manufacturer. A manufacturer can therefore better understand the error that occurred and the conditions surrounding the error. This can lead to product fixes and improvements.
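An error-log entry of the kind described above might be structured as in the following sketch; the fields recorded and the JSON serialization are assumptions drawn from the status information the passage says may be logged:

# Illustrative error-log record (fields and format assumed).
import json, time

def log_error(code, grill_status, error_log):
    """Append one entry; the log may live on remote device 901, on the
    grill, or both, and may later be sent over the internet (905)."""
    entry = {
        "timestamp": time.time(),
        "code": code,
        "status": grill_status,  # e.g., temperatures, targets, cook time
    }
    error_log.append(entry)
    return json.dumps(entry)  # serializable for upload to a manufacturer

error_log = []
log_error("03", {"element_temp_f": 480, "target_f": 450, "cook_min": 42}, error_log)
assert len(error_log) == 1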
The present inventions also provide methods for reducing the risk of unsafe electric conditions during grilling. In a preferred embodiment, a user may use an electric grill 510 to deliver current to one or more electric heating elements 103 and/or 104, which may be connected to a voltage line 101 and a neutral line 102 through triacs 108 and 109 and latch relays 106 and 107. When heating element 103 or 104 is activated by the user, a current transformer 105 in the protection circuitry 100 of electric grill 510 measures a difference, if any, between the current drawn by electric grill 510 and the current returned from electric grill 510. If a current difference is detected, methods of the present inventions generate an electric signal to activate a trip controller 118 connected to a latch relay 106 and/or 107. Methods of the present inventions may additionally include using the protection circuitry 100 of electric grill 510 to measure the current being delivered to a heating element 103 or 104 with a Hall Effect sensor 119 and conveying the measured current to a microprocessor 113. With the electric grill 510 and its protection circuitry 100 active, the microprocessor 113 compares the measured current to a predetermined current threshold. The predetermined current threshold may be dynamically selected based on the current operating mode selected by a user. If the measured current exceeds the predetermined threshold while the electric grill 510 is in use, the present inventions may include the step of disabling the flow of current by tripping a latch relay 106 and/or 107, or by disabling a triac 108 and/or 109. In additional embodiments, signals indicative of normal operation are sent from the microprocessor 113 to a watchdog monitor 120. In turn, watchdog monitor 120 may enable triacs 108 and/or 109 to permit the flow of electricity to heating elements 103 and/or 104 during normal operation, and disable the flow of electricity during a phase of abnormal operation. The devices and methods described above may be used to provide a safer electric grill experience. Various embodiments allow a user to activate a knob 501 and/or 502 (or other input means, such as wireless input) to grill food using heat from heating elements 103 and/or 104, which in turn are controlled by a microprocessor 113. Display 503 may convey, among other things, the current temperature to the user to allow the user to decide when to put food onto a grate or how long to leave food cooking. A user may be using an electric grill 510 that has been exposed to harsh conditions for a prolonged period of time and which has electric components that may leak current. Embodiments of the invention provide a current transformer 105 which functions together with ground fault detection unit 117 and trip controller 118 to detect current leakage and, in response, trip latch relays 106 and 107. Although grilling will be halted, the user will remain safe from the leaking current. A user may respond, for example, by removing and re-installing heating elements 103, 104, and pushing a reset button 511 or similar switch. Provided the current leakage has been resolved, normal operation can continue. During normal cooking, a heating element 103, 104 or other component may become unintentionally loose, or may be damaged by heat or other environmental factors. A possible result is that electric grill 510 may draw an unsafe current, which is detected by microprocessor 113 via a signal from Hall Effect sensor 119. The microprocessor 113 may respond by activating trip controller 118 and thereby opening latches 106 and 107. As described above, the result is a stoppage of current, and the user may attempt to restart the electric grill 510 via reset button 511. Similarly, an unsafe condition may lead heating element 103 and/or 104 to draw an amount of current that differs from the amount expected based on the user settings of knobs 501 and/or 502.
In response, embodiments of the invention provide a microprocessor 113 which may disable triacs 108, 109 (via their drivers) to stop the flow of current. A user may be alerted via display 503, but latches 106 and 107 are not tripped in this case, so the user may not have to press reset button 511. Further, embodiments of the invention may include a watchdog monitor 120 which may be provided to monitor the correct operation of microprocessor 113 while electric grill 510 is in use. Watchdog monitor 120 may disable triacs 108, 109 if microprocessor 113 enters an abnormal operating state, including a possible reboot. The user does not have to press reset button 511 and may wait for microprocessor 113 to return to normal operation to resume grilling. The hardware and specifically configured microprocessor may be provided to a user to ensure a safe grilling experience. A person of skill in the art would recognize that electric grills having various combinations of the embodiments described above are possible, and that not every feature must necessarily be included in each embodiment. Moreover, although the present inventions have particular applicability to grills for outdoor use, it will be understood by those of skill in the art that the present inventions may be used in a variety of grills or other devices, whether for indoor or outdoor use. The present inventions also include methods for using a remote device 901, such as a cell phone or tablet, to communicate with an electric grill 510. For example, a user may use a cell phone to wirelessly communicate with electric grill 510 and activate it. Moreover, a user may use a remote device user input 903, such as a touch screen, to select a desired target temperature. In embodiments of the invention, a user may select a desired cooking profile, and remote device 901 retrieves, from memory 904, the associated temperature, which is communicated wirelessly to microprocessor 113 and/or band controller 703. In response, microprocessor 113 and band controller 703 raise the power delivered to heating elements 103, 104 until the desired target temperature is achieved. Band controller 703 may be used to maintain a temperature within the range of the predetermined bands. In this way, a user may use electric grill 510 to cook a food item as long as no error has occurred at electric grill 510 (or, by extension, at protection circuitry 100). During normal operation, the user may wirelessly receive status information from electric grill 510 on remote device 901, including various parameters concerning the temperature, time, and status of the grill. If an unsafe current condition occurs, microprocessor 113 may detect it, in accordance with the present disclosure, and send an error code to the user at the user's remote device 901. An audible and/or visual alert may be provided at electric grill 510 and/or remote device 901 to alert the user that an unsafe current condition has occurred. Moreover, the user may be presented with a message explaining the type of error which has occurred and providing suggestions for how to fix the error. In embodiments of the invention, the user may opt to save an error log, which may contain the type of error that occurred as well as various information concerning the grill's operating conditions at the time of the error. The error log may then be sent over the internet to a manufacturer for further diagnosis and repair information. The above description is not intended to limit the meaning of the words used in or the scope of the following claims that define the invention.
Rather, the descriptions and illustrations have been provided to aid in understanding the various embodiments. It is contemplated that future modifications in structure, function, or result will exist that are not substantial changes, and that all such insubstantial changes in what is claimed are intended to be covered by the claims. Thus, while preferred embodiments of the present inventions have been illustrated and described, one of skill in the art will understand that numerous changes and modifications can be made without departing from the claimed invention. In addition, although the term "claimed invention" or "present invention" is sometimes used herein in the singular, it will be understood that there are a plurality of inventions as described and claimed. Various features of the present inventions are set forth in the following claims.
11860241
DETAILED DESCRIPTION OF EMBODIMENTS
FIG. 1 illustrates an exemplary test point adaptor 1 having a main body 2 and a test body 11 coupled with one another. The main body includes a first end 3 comprising a first interface 4, for example, a swivel member or swivel nut, and a second end 5 comprising a second interface 6. A first center conductor 7 is arranged in the main body 2, extending beyond the first end 3. The test body 11 has a first test body end 13 connected with the main body 2, for example, via a screw connection, and a second test body end 14 comprising a third interface 15. In the illustrated embodiment of the test point adaptor 1, the first interface 4 is a male threaded interface and the second interface 6 is a female threaded interface. However, it should be appreciated that in some embodiments, both of the first and second interfaces 4, 6 may be female threads, both may be male threads, or they may be other kinds of engaging means. Referring now to FIG. 2, the first center conductor 7 is arranged along a longitudinal axis 8 of the main body 2. The first center conductor 7 is kept substantially in the center of the main body 2 by a first seizure 9 and a second seizure 10. The test body 11 is coupled with the main body 2 such that a longitudinal axis 12 of the test body 11 is substantially perpendicular to the longitudinal axis 8 of the main body 2. It should be appreciated that the test body 11 may be coupled with the main body 2 in other ways than substantially perpendicular to the main body 2, for example, at an angle such as 15°-90°, such as 25°-80°, such as 35°-70°, or such as 35°-55°. As mentioned above, the first test body end 13 of the test body 11 may be threadably coupled with the main body 2. A seizure 16 is mounted at the first test body end 13. The seizure 16 is provided with an annular projection 17. A spring 19 is arranged between the annular projection 17 of the seizure 16 and an end rim 18 of the first test body end 13. The seizure 16 is arranged so as to be able to move along the longitudinal axis 12 of the test body 11. The spring 19, for example, an annular spring washer, biases the seizure 16 along the longitudinal axis 12 of the test body 11 in the direction away from the second end 14 of the test body 11. A contact member 20 is inserted in a central seizure aperture 21 of the seizure 16. The contact member 20 is provided with a central aperture 22 for receiving a first end 24 of a resistor 23 or similar component. With the first resistor end 24 positioned in the central aperture 22 of the contact member 20, and the contact member 20 inserted in the central seizure aperture 21 of the seizure 16, the contact member 20 clamps around the first resistor end 24. Thus, the first resistor end 24 is kept in position and the contact member 20 is fixed in the central seizure aperture 21 of the seizure 16. As the contact member 20 is electrically conductive, the resistor 23 is in electrical contact with objects in contact with the contact member 20. The resistor 23 extends internally in the test body 11 along the longitudinal axis 12. At the second end 14 of the test body 11, the resistor 23 is kept in position by a gripping arrangement 25. The gripping arrangement 25 is provided with a central aperture 26. The central aperture 26 of the gripping arrangement 25 is arranged so as to receive a second end 27 of the resistor 23. The gripping arrangement 25 is electrically conductive so that a center pin of test equipment (not shown) may be inserted into the test body 11 in electrical contact with the resistor 23.
The second end 14 of the test body 11 is terminated by a removable cap 28 comprising a terminator 29, for example, a resistor, between signal and ground. The terminator 29 is configured to provide electrical termination of a signal so as to prevent an RF signal from being reflected back from the second end 14 of the test body 11 and causing interference. The cap 28 is slidably coupled with the third interface 15 of the second end 14 of the test body 11. Further, in order to achieve watertight connections, the test point adaptor 1 is provided with sealing members 31, 32, 33, for example, O-rings. Referring now to FIGS. 3 and 4, the test body 11 includes an outer conductive sleeve 40 having a conical contact surface 41 at the third interface 15. The third interface 15 also includes a nonconductive sleeve 42, for example, a plastic sleeve, concentrically coupled with the outer conductive sleeve 40 and surrounding the gripping arrangement 25 within the test body 11. The nonconductive sleeve 42 is mechanically coupled with the outer sleeve 40 such that the sleeves 40, 42 are not axially slidable relative to one another. The nonconductive sleeve 42 includes a tapered opening 43 configured to assist with insertion of a lead 30 of the terminator 29 into the central aperture 26 of the gripping arrangement 25. It should be understood that the gripping arrangement 25 may comprise a slotted sleeve, prongs, or any other gripping member that is capable of maintaining a forcible connection so as to ensure electrical continuity between the resistor 23 and either the terminator 29 or test equipment (not shown). As best illustrated in FIG. 4, the cap 28 includes a sleeve 45 configured to matingly engage an outer surface 46 of the outer conductive sleeve 40. The sleeve 45 of the cap 28 includes slots 46 extending in the direction of the longitudinal axis 12. As a result of the slots 46, the cap sleeve 45 can be manufactured with an inside diameter that is slightly smaller than the outside diameter of the outer sleeve 40. Thus, when the cap sleeve 45 is slidably coupled with the outer sleeve 40, the cap sleeve 45 is expanded to receive the outer sleeve 40, and the cap sleeve 45 provides a biasing force against the outer sleeve 40 to provide electrical continuity between the cap 28 and the outer conductive sleeve 40. The cap 28 also includes an annular groove 50 in an inner surface of the cap sleeve 45. The annular groove 50 is configured to receive a sealing member 51, for example, an O-ring. The sealing member 51 is configured to engage the outer surface 46 of the outer conductive sleeve 40 when the cap 28 is matingly engaged with the outer surface 46 of the outer sleeve 40 to ensure a watertight connection at the third interface 15. As shown in FIGS. 3 and 4, an endmost region 47 of the outer surface 46 of the outer sleeve 40 may have an outside diameter that is smaller than that of a region 48 of the outer surface 46 that engages the cap sleeve 45. As a result, when the cap 28 is coupled with the outer sleeve 40, the sealing member 51 may be configured to engage the outer surface 46 to achieve the watertight connection, while the cap sleeve 45 will not matingly engage the endmost region 47, avoiding possible damage to and/or deterioration of the connection. The cap 28 also includes a conical contact surface 49 configured to engage with the conical contact surface 41 of the outer conductive sleeve 40 when the cap 28 is matingly engaged with the outer sleeve 40. The conical contact surfaces 41, 49 provide a longer engagement interface between the cap 28 and the outer sleeve 40 than conventional caps that provide radial (i.e., non-conical) contact surfaces.
Thus, the RF signal is less likely to escape at the third interface, despite there being only a sliding connection between the cap 28 and the outer sleeve 40 (i.e., instead of a threaded connection). Although FIGS. 3 and 4 illustrate the conical contact surface 41 tapering radially inward and the conical contact surface 49 tapering radially outward, it should be understood that in some embodiments, the conical contact surface 41 may taper radially outward and the conical contact surface 49 may taper radially inward. The described embodiment of the test body 11 and its components provides electrical contact with a test instrument (not shown) connected at the second end 14 of the test body 11, which is in turn electrically connected with the contact member 20. The contact member 20 is in contact with the first center conductor 7 arranged in the main body 2. Further details of the seizure 16, the contact member 20, the spring 19, and other features of the test point adaptor 1, as well as mounting of the test point adaptor 1 on a component, are described in PCT International Publication Number WO 2011/079196, which is incorporated herein by reference. Referring to FIG. 5, in some aspects of the test point adaptor 1, the cap sleeve 45 may include an annular ridge 55 (or a series of intermittent ridges arranged annularly). The region 48 of the outer surface 46 of the outer conductive sleeve 40 that engages the cap sleeve 45 may include an annular groove 56 that is configured to matingly receive the annular ridge 55. The annular ridge 55 and the annular groove 56 may be positioned on the cap sleeve 45 and outer sleeve 40, respectively, to provide a positive connection force between the cap sleeve 45 and the outer sleeve 40. As a result, the conical contact surfaces 41, 49 are urged against one another with a force when the cap sleeve 45 and the outer sleeve 40 are matingly connected to ensure electrical continuity. The annular ridge 55 and the annular groove 56 may provide tactile feedback to a user as to when the cap sleeve 45 and the outer sleeve 40 are matingly connected, and may also help prevent the cap sleeve 45 and the outer sleeve 40 from sliding apart. Referring now to FIG. 6, in some aspects of the test point adaptor 1, the outer sleeve 40 may include a tapered outer surface. For example, the outer surface 46 may be tapered from point 66 toward a shoulder 67 of the outer sleeve 40. That is, the outside diameter of the outer sleeve 40 may taper from point 66 to shoulder 67. As discussed above, the cap sleeve 45 can be manufactured with an inside diameter that is slightly smaller than the outside diameter of the outer sleeve 40. For example, the cap sleeve 45 may have an inside diameter that is slightly smaller than the outside diameter of the outer sleeve 40 at a point along the outer sleeve 40 that is between point 66 and shoulder 67. Thus, when the cap sleeve 45 is slidably coupled with the outer sleeve 40, the cap sleeve 45 is expanded to receive the outer sleeve 40, and the cap sleeve 45 provides a biasing force against the tapered region of the outer sleeve 40 to provide electrical continuity between the cap 28 and the outer conductive sleeve 40. As a result, the conical contact surfaces 41, 49 are urged against one another with a force when the cap sleeve 45 and the outer sleeve 40 are matingly connected to ensure electrical continuity. The tapered region of the outer sleeve 40 may cooperate with the cap sleeve 45 to help prevent the cap sleeve 45 and the outer sleeve 40 from sliding apart.
FIG. 7 illustrates another exemplary test point adaptor 70 having a body 71 including a first end 72 that includes a first interface 76, for example, a swivel member or swivel nut, and a second end 73 including a second interface 78. The test point adaptor 70 could be connected to an amplifier via a free port without requiring opening of the adaptor. A center conductor 74 is arranged in the body 71, extending beyond the first end 72. In the illustrated embodiment of the test point adaptor 70, the first interface 76 is a male threaded interface. Referring to FIG. 8, which is a cross-sectional view of test point adaptor 70, the center conductor 74 is arranged along a longitudinal axis 75 of the body 71. The center conductor 74 is kept substantially in the center of the body 71 by a seizure 79. A contact member 80 is inserted into an opening in an end of the center conductor 74. The contact member 80 is arranged to receive a first resistor end 82 of resistor 83. When the contact member 80 is inserted in the opening in the end of the center conductor 74, the contact member 80 clamps around the first resistor end 82. Thus, the first resistor end 82 is kept in position and the contact member 80 is fixed in the opening in the end of the center conductor 74. As the contact member 80 is electrically conductive, the resistor 83 is in electrical contact with objects in contact with the contact member 80. In some embodiments, a gripping arrangement 25, as shown in FIGS. 2 and 3, may be arranged to receive a second resistor end 84 of the resistor 83. The gripping arrangement 25 is electrically conductive so that a center pin of test equipment (not shown) may be inserted into the second end 73 in electrical contact with the resistor 83. The second end 73 of the body 71 is terminated by a removable cap 88 comprising a terminator 89, for example, a resistor, between signal and ground. The terminator 89 is configured to provide electrical termination of a signal so as to prevent an RF signal from being reflected back from the second end 73 of the body 71 and causing interference. The cap 88 is slidably coupled with the second interface 78 of the second end 73 of the body 71. Further, in order to achieve watertight connections, the test point adaptor 70 is provided with sealing members 85, 86, for example, O-rings. The body 71 includes an outer conductive sleeve 90 having a conical contact surface 91 at the second interface 78. The second interface 78 also includes a nonconductive sleeve 92, for example, a plastic sleeve, concentrically coupled with the outer conductive sleeve 90 and surrounding the gripping arrangement within the body 71. The nonconductive sleeve 92 is mechanically coupled with the outer sleeve 90 such that the sleeves 90, 92 are not axially slidable relative to one another. It should be understood that the gripping arrangement may comprise a slotted sleeve, prongs, or any other gripping member that is capable of maintaining a forcible connection so as to ensure electrical continuity between the resistor 83 and either the terminator 89 or test equipment (not shown). As shown in FIG. 8, the cap 88 includes a sleeve 93 configured to matingly engage an outer surface 94 of the outer conductive sleeve 90. The sleeve 93 of the cap 88 includes slots, similar to slots 46, extending in the direction of the longitudinal axis 75. As a result of the slots, the cap sleeve 93 can be manufactured with an inside diameter that is slightly smaller than the outside diameter of the outer sleeve 90.
Thus, when the cap sleeve 93 is slidably coupled with the outer sleeve 90, the cap sleeve 93 is expanded to receive the outer sleeve 90, and the cap sleeve 93 provides a biasing force against the outer sleeve 90 to provide electrical continuity between the cap 88 and the outer conductive sleeve 90. The cap 88 also includes an annular groove 95 in an inner surface of the cap sleeve 93. The annular groove 95 is configured to receive a sealing member 85, for example, an O-ring. The sealing member 85 is configured to engage the outer surface of the outer conductive sleeve 90 when the cap 88 is matingly engaged with the outer surface of the outer conductive sleeve 90 to ensure a watertight connection at the second interface 78. As shown in FIG. 8, an endmost region 97 of the outer surface of the outer conductive sleeve 90 may have an outside diameter that is smaller than that of a region 96 of the outer surface that engages the cap sleeve 93. As a result, when the cap 88 is coupled with the outer conductive sleeve 90, the sealing member 85 may be configured to engage the outer surface of outer conductive sleeve 90 to achieve the watertight connection, while the cap sleeve 93 will not matingly engage the endmost region 97, avoiding possible damage to and/or deterioration of the connection. The cap 88 also includes a conical contact surface 99 configured to engage with the conical contact surface 91 of the outer conductive sleeve 90 when the cap 88 is matingly engaged with the outer conductive sleeve 90. The conical contact surfaces 91, 99 provide a longer engagement interface between the cap 88 and the outer conductive sleeve 90 than conventional caps that provide radial (i.e., non-conical) contact surfaces. Thus, the RF signal is less likely to escape at the second interface, despite there being only a sliding connection between the cap 88 and the outer conductive sleeve 90 (i.e., instead of a threaded connection). Although FIG. 8 illustrates the conical contact surface 91 tapering radially inward and the conical contact surface 99 tapering radially outward, it should be understood that in some embodiments, the conical contact surface 91 may taper radially outward and the conical contact surface 99 may taper radially inward. The described embodiment of the body 71 and its components provides electrical contact with a test instrument (not shown) connected at the second end 73 of the body 71, which is in turn electrically connected with the contact member 80. The contact member 80 is in contact with the center conductor 74 arranged in the body 71. In some embodiments of the test point adaptor 70, the cap sleeve 93 may include an annular ridge 55 (or a series of intermittent ridges arranged annularly), as shown in FIG. 5 with respect to a different embodiment. Similarly, a region of the outer conductive sleeve 90 of the test point adaptor 70 that engages the cap sleeve 93 may include the annular groove 56 that is configured to matingly receive the annular ridge 55, as shown in the embodiment of FIG. 5. The annular ridge 55 and the annular groove 56 may be positioned on the cap sleeve 93 and outer conductive sleeve 90, respectively, to provide a positive connection force between the cap sleeve 93 and the outer conductive sleeve 90. As a result, the conical contact surfaces 91, 99 are urged against one another with a force when the cap sleeve 93 and the outer sleeve 90 are matingly connected to ensure electrical continuity.
The annular ridge 55 and the annular groove 56 may provide tactile feedback to a user as to when the cap sleeve 93 and the outer conductive sleeve 90 are matingly connected, and may also help prevent the cap sleeve 93 and the outer sleeve 90 from sliding apart. As shown in FIG. 6 with respect to an earlier described embodiment, the outer conductive sleeve 90 of test point adaptor 70 also may be tapered from a point on the outer conductive sleeve 90 in contact with the cap sleeve 93. That is, the outside diameter of the outer conductive sleeve 90 may taper from the point 66 to a shoulder 67 in the same manner as shown in FIG. 6. The cap sleeve 93 may be manufactured with an inside diameter that is slightly smaller than the outside diameter of the outer conductive sleeve 90. Thus, when the cap sleeve 93 is slidably coupled with the outer conductive sleeve 90, the cap sleeve 93 is expanded to receive the outer conductive sleeve 90, and the cap sleeve 93 provides a biasing force against the tapered region of the outer conductive sleeve 90 to provide electrical continuity between the cap 88 and the outer conductive sleeve 90. As a result, the conical contact surfaces 91, 99 are urged against one another with a force when the cap sleeve 93 and the outer sleeve 90 are matingly connected to ensure electrical continuity. The tapered region of the outer conductive sleeve 90 may cooperate with the cap sleeve 93 to help prevent the cap sleeve 93 and the outer conductive sleeve 90 from sliding apart. Additional embodiments include any one of the embodiments described above, where one or more of its components, functionalities, or structures is interchanged with, replaced by, or augmented by one or more of the components, functionalities, or structures of a different embodiment described above. It should be understood that various changes and modifications to the embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present disclosure and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims. Although several embodiments of the disclosure have been disclosed in the foregoing specification, it is understood by those skilled in the art that many modifications and other embodiments of the disclosure will come to mind to which the disclosure pertains, having the benefit of the teaching presented in the foregoing description and associated drawings. It is thus understood that the disclosure is not limited to the specific embodiments disclosed hereinabove, and that many modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although specific terms are employed herein, as well as in the claims which follow, they are used only in a generic and descriptive sense, and not for the purposes of limiting the present disclosure, nor the claims which follow.
11860242
The figures are not necessarily to scale. Like numbers used in the figures refer to like components. However, it will be understood that the use of a number to refer to a component in a given figure is not intended to limit the component in another figure labeled with the same number. DESCRIPTION
Embodiments described in this disclosure involve optical monitoring systems for power grid components. The impact of manufacturing imperfections, structural degradation, equipment failures, capacity limitations, and natural accidents and catastrophes, which cause power disturbances and outages, can be reduced by online system condition monitoring and diagnostics. The recent increase in distributed energy resources (DER) in the form of plug-in electric vehicles (PEVs), renewable energy, and other alternative energy sources also presents new challenges, such as power-grid integration, power system stability, congestion, atypical power flows, and energy storage gaps. There is a growing need for intelligent and low-cost monitoring and control with online sensing technologies to maintain safety, reliability, efficiency, and uptime of the power grid. However, harsh and complex electric-power-system environments pose great challenges for low-cost sensing in smart-grid applications. Specifically, electrical sensors may be subject to radio frequency interference (RFI), highly caustic/corrosive environments, high moisture/humidity levels, vibrations, dust, or other conditions that challenge performance and/or greatly increase cost. While wireless sensor networks (WSNs) have been explored as a low-cost option in this regard, electromagnetic interference (EMI) effects make it difficult to monitor their communication link quality, thereby limiting usage of WSNs for grids. WSNs also present additional vulnerabilities to cyber threats. The optical monitoring approaches described herein can be used in a monitoring system that monitors any type of power grid component and/or multiple types of power grid components. For example, a monitoring system according to the disclosed approaches may monitor electrical power grid components such as power distribution transformers, power transmission transformers, power grid switches, capacitors, relays, and/or other power grid components. Among the power grid components of particular interest, transformers are one of the more expensive pieces of equipment found in a distribution network. Power transmission transformers are designed to step up the voltage from the power distribution plant for long range transmission. Power distribution transformers step down the high voltage from transmission levels to deliver power from high voltage transmission networks to customers. Being relatively simple in construction and at the same time mechanically robust, they offer a long service life. Transformer sustainability has become a growing challenge due to transformer aging and the ongoing trend of supplying a growing number of non-linear and variable DER loads through the power transformers. Growing uncertainties in transformer aging result from variable loads and other system complexities due to increasingly high levels of DER. Variable and non-linear loads can be a factor that accelerates transformer aging.
For example, battery chargers for PEVs are high-power devices that employ nonlinear switching, which could result in significant harmonic voltages and currents injected into the distribution system. Fast charging, the preferred technique to accelerate PEV adoption, implies precisely these types of nonlinear loads. Simulation models have suggested that some scenarios with high levels of DER adoption (such as large numbers of PEVs being fast-charged simultaneously) can significantly accelerate transformer aging. Other types of distributed generation (DG), such as rooftop photovoltaics, can possibly extend transformer life in radial networks by relieving them of their peak loads at low to moderate levels of penetration. However, studies suggest that as DG penetration increases, voltage limit violations at transformer secondaries in mesh network-type power distribution systems (common in large metro areas) become increasingly probable. In transformer designs, the use of oil as an insulation material has become ubiquitous in light of oil enabling superior electrical performance with low losses. However, the flammability of oil-filled transformers can pose major public safety risks, particularly in underground installations as they age and become less robust to transient over-voltages or other internal failure mechanisms. Thus, a need is emerging for low-cost sensing to monitor key internal parameters in transformers, particularly in distribution transformers, for reliable predictions of degradation and/or impending failures. FIG.1is a simplified diagram of a power grid100. The power grid100includes some type of power generator105that generates power for the grid, e.g., through burning coal or natural gas, hydroelectric, nuclear, wind, photovoltaics, or other types of power generation. The output voltage from the power generator105may be stepped up by transformers at a transmission substation110and carried by high voltage transmission lines111to one or more power distribution substations120. The voltage is stepped down by power distribution transformers at the power distribution substations120and is provided to houses130and/or other facilities connected to the power grid100. Embodiments discussed in this disclosure are directed to optical systems for monitoring power grid components. For example, the power distribution substations120may include one or more optical monitoring systems for power distribution transformers in accordance with embodiments discussed herein. The transmission substation110may include one or more optical monitoring systems for power transmission transformers. Although the approaches for power grid monitoring are explained in this disclosure using the example of power transformers as the monitored power grid components, it will be appreciated that the approaches are equally applicable to other components of the power grid. FIG.2depicts an optical monitoring system200that may be arranged to monitor power transformers205located at a power grid substation, in accordance with some embodiments. The optical monitoring system200includes one or more power transformer monitors220. Each monitor220includes a plurality of optical sensors222disposed on one or more optical fibers221. Each optical sensor222is disposed at a location within or on a corresponding power transformer and is configured to sense parameters of the power transformer205. 
The parameters sensed may be internal parameters, such as strain, temperature, vibration, chemistry, or operational parameters, such as voltage and current. In some embodiments, each optical sensor may sense a different parameter of the transformer than other optical sensors monitoring the same transformer. In some scenarios two or more of the optical sensors monitoring a transformer may sense the same parameter, for example, to achieve an average of the sensed parameter or to sense the same parameter at different locations of the transformer. In the embodiment depicted inFIG.2, each transformer205is monitored by multiple sensors222disposed on a single optical fiber221. Alternatively, a single transformer may be monitored by multiple sensors disposed on multiple optical fibers and/or multiple transformers may be monitored by multiple sensors disposed on a single optical fiber. The monitoring system200includes control circuitry210optically coupled to the optical fibers221of the transformer monitors220. In various embodiments, the control circuitry may be arranged for receiving optical output signals from the optical monitors of one, some, or all of the transformers205of the substation. The control circuitry210includes a light source211that provides input excitation light to the optical sensors222. Each of the sensors222reflects a portion of the input light as sensor output light. The sensor output light exhibits wavelength shifts of the central wavelength of the sensor according to changes in the sensed parameters of the transformer. In the embodiment shown inFIG.2, the output light from each sensor222that monitors a transformer205is multiplexed onto a single optical fiber221. Thus, the output light from each of the sensors is multiplexed onto the optical fiber221. The control circuitry210includes an optical wavelength division demultiplexer212that spatially distributes the output light carried on the optical fiber221. A detector unit215comprising one or more photodetectors converts the output light into electrical signals representative of the sensed parameters of the transformer. The wavelength shifts associated with the sensed parameters can be small compared to spacing between the central wavelengths of the sensors. Therefore, it is feasible to separate the optical signals from the individual sensors, referred to as the component signals, using the wavelength division demultiplexer, which may comprise a linear variable filter, arrayed waveguide grating (AWG), or other wavelength dispersive optical element. Alternatively or additionally, a time-domain multiplexing scheme can be employed that operates by exciting short pulses of light in the optical fiber which selectively addresses each of the various sensors. Using various multiplexing configurations, e.g., wavelength division multiplexing/demultiplexing and/or time division multiplexing/demultiplexing, several thousand sensors can be monitored by a single detection unit as described in more detail below. In some embodiments, the control circuitry210includes an analyzer216configured to analyze the electrical signals generated by the detector unit215. The analyzer may be a processor configured to predict, detect, and/or diagnose one or more functional, state, and/or degradation conditions based on analysis of the electrical signals. Cybersecurity is important for power grid systems. 
In some embodiments, the monitoring system200may include one or more optical sensors217coupled to the optical fibers218and configured to monitor the optical signals carried on the optical fibers218for unusual signal anomalies that are not attributable to transformer parameters. These security sensors217can provide an alert to attacks or other breaches of security. The additional sensors for cybersecurity and/or breach detection may be coupled to the optical fibers218within the control circuitry210as shown, may be coupled to the optical fibers221, and/or cybersecurity and/or breach detection optical sensors may be disposed at both locations. FIG.3provides a more detailed view of a monitoring system300in accordance with some embodiments. Multiple optical sensors, S1, S2, . . . SN, are arranged to respectively sense multiple internal parameters of the transformer301. Additional internal and/or external sensors may be arranged to monitor operational transformer parameters. For example, internal and/or external sensors may be configured to sense operational parameters of the transformer such as input current, output current, input voltage, and output voltage. Optical sensors can be used to monitor a number of parameters. For example, the optical sensors S1, S2, . . . SN may be disposed within or outside the transformer301and configured to sense one or more transformer parameters such as temperature, core strain, vibration, presence of various chemicals, corrosion, presence of gas (including dissolved gas such as a hydrogen-containing dissolved gas), partial discharge, pressure, current, voltage, and/or other transformer parameters. The sensors S1, S2, . . . SN may comprise any type (or multiple types) of optical sensor, including Fiber Bragg Grating (FBG) sensors and/or etalon or Fabry-Perot (FP) sensors. Both the FBG and etalon/FP sensors are collectively referred to herein as optical sensors or fiber optic sensors. Although some examples provided herein are based on FBG sensors, it will be understood that other types of optical sensors could alternatively or additionally be used in these and other embodiments. Fiber optic sensors offer many advantages over their electrical counterparts. They are thin (typically about 100-200 μm in diameter), lightweight, sensitive, robust to harsh environments, and immune to EMI. Fiber optic sensors can simultaneously measure multiple parameters with high sensitivity in multiplexed (muxed) configurations over long optical fiber cables. Fiber optic sensors have demonstrated robustness to various harsh environments, including long-term (5+ years) exposure to oil-soak environments, as shown for downhole sensing. The most common fiber optic material is silica, which is corrosion resistant, can withstand 1 GPa tension for more than five years, can survive temperatures between −200° C. and 800° C., and has a dielectric breakdown strength greater than 470 kV/mm. Various types of plastic are also useful for optical fibers and optical sensors. Fiber optic sensors such as FBG sensors are mechanically robust with respect to shock and vibration. Thus, embedded fiber optic sensors in transformers offer an attractive solution to reliably measure and monitor relevant parameters. In addition, the immunity of optical fiber cables to EMI and radio frequency interference (RFI) makes it a particularly suitable communication medium for high voltage operating environments in substations and over long distances across the grid. 
Thus, the multifunctional nature of optical fiber cables can be exploited to combine sensing, communications, shielding, and lightning protection functions in power systems. FBG sensors can be formed by a periodic modulation of the refractive index along a finite length (typically a few mm) of the core of the optical fiber. In some embodiments the periodic modulation can be inscribed on the fiber optic through direct writing using femtosecond lasers. The modulation pattern reflects a wavelength, called the Bragg wavelength, that is determined by the periodicity of the refractive index profile of the FBG sensor. In practice, the sensor typically reflects a narrow band of wavelengths centered at the Bragg wavelength. The Bragg wavelength at a characteristic or base value of the external stimulus is denoted λ, and light having a peak, center, or centroid wavelength λ (and a narrow band of wavelengths near λ) is reflected from the sensor when it is in a predetermined base condition. For example, the base condition may correspond to 25 degrees C. and/or zero strain. When the sensor is subjected to stimulus, the stimulus changes the periodicity of the grating and the index of refraction of the FBG, and thereby alters the reflected light so that the reflected light has a peak, center, or centroid wavelength, λs, different from the base wavelength, λ. The resulting wavelength shift, Δλ/λ=(λ−λs)/λ, is a proxy measure of the stimulus. FBG sensors may be sensitive to changes in refractive index n, strain ε1, and ambient temperature changes ΔT, for example. The refractive index n can be made sensitive to the chemical environment of the sensor by stripping the optical fiber cladding over the sensor element region and/or by adding appropriate coatings to this sensitive area. Strain and temperature shift the output wavelength of the sensor due to changes in the periodicity of the grating. The relation between wavelength shift (Δλ/λ) and simultaneous strain and temperature in an FBG sensor is: Δλ/λ={1−(n2/2)[p12−n(p11+p12)]}ε1+[α+(1/n)(dn/dT)]ΔT  [1] where n is the index of refraction, p11and p12are strain-optic constants, ε1is longitudinal strain, α is the coefficient of thermal expansion, and T is the temperature. In some implementations, by using multiple FBG sensors that are differently affected by strain and temperature (due to design or mounting), dual fibers or special FBG sensors in combination with data evaluation algorithms, the impacts from strain and temperature on the wavelength shift can be separated. For example, strain and temperature can be separated using a pair of adjacent FBGs at different wavelengths attached to the transformer. One of the two adjacent FBGs can be configured to be sensitive to thermal strain alone using thermally sensitive paste or by enclosing it in a special tubing. The measured wavelength shift of the "reference" FBG sensor in the tubing can be subtracted from the total wavelength shift of the adjacent FBG strain sensor for temperature compensation. As discussed above, fiber optic sensors are useful for sensing temperature and strain. Vibration can be detected as dynamic strain variations. With suitable coatings and configurations, FBGs and/or other optical sensors can be useful for monitoring current, voltage, chemical environment, and corrosion. For example, some parameters of interest can be mapped to a strain signal on the FBG through special coatings that undergo strain, typically in a linear relationship, in response to the parameter of interest. 
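For illustration only, the following minimal Python sketch evaluates equation [1] as given above and applies the reference-FBG temperature compensation just described. The optical constants are typical published values for silica chosen purely for illustration; they are not values taken from this disclosure.

```python
# Illustrative sketch of equation [1] and the reference-FBG temperature
# compensation described above. Constants are typical silica values chosen
# for illustration only; they are not values from this disclosure.

def fbg_relative_shift(strain, delta_t, n=1.456, p11=0.113, p12=0.252,
                       alpha=0.55e-6, dn_dt=8.6e-6):
    """Relative wavelength shift (delta-lambda / lambda) of an FBG under
    longitudinal strain and a temperature change delta_t (K), per
    equation [1] as written above."""
    strain_term = (1 - (n**2 / 2) * (p12 - n * (p11 + p12))) * strain
    thermal_term = (alpha + dn_dt / n) * delta_t
    return strain_term + thermal_term

# Example: 10 microstrain at +5 K, with a co-located reference FBG that is
# sensitive to temperature only (e.g., isolated in a special tubing).
total = fbg_relative_shift(strain=10e-6, delta_t=5.0)
reference = fbg_relative_shift(strain=0.0, delta_t=5.0)
strain_only = total - reference  # temperature-compensated strain signal
```

The subtraction in the last line mirrors the compensation scheme above: the reference sensor's shift carries only the thermal term, so the difference isolates the strain term.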
One or more immediately adjacent optical sensors may be used to compensate for the influence of confounding parameters, such as temperature and/or vibration effects, in order to recover the parameter of interest with high fidelity. For example, corrosion and/or moisture can be converted into strain signals using suitable coatings and/or by bonding the sensors or sensor coatings to structural components that undergo tensile strain with corrosion. As another example, chemical sensing can be accomplished by depositing specific chemically sensitive coatings that undergo strain in response to changing concentrations of the chemical species of interest. For example, Palladium (Pd) coatings undergo reversible strain in response to hydrogen-containing gases. Both transformer oil and cellulose have carbon-based molecular structures rich in hydrogen. The decomposition of oil and cellulose forms a large number of byproducts, including combustible and noncombustible gases. Hydrogen is naturally present in most of those compounds. A concentration of up to 0.05% by volume of H2and short-chain hydrocarbon gases can be an acceptable level for healthy transformers. Optical sensors with Pd coating are useful for detecting hydrogen-based gases. Hydrogen gas sensing with FBGs in free air suggests that Pd-coated FBGs may have about 7 picometer (pm) wavelength shift response for a 1% volume H2gas concentration change with a response time of about 5 minutes, without accounting for thermal effects. A similar or greater response sensitivity may be achieved for hydrocarbons. With a detection unit resolution of 50 femtometers (fm), a resolution of 0.01-0.02% H2may be achieved in free air, after accounting for thermal effects. Similar resolution levels may be achievable for dissolved H2or H-containing gas in oil, enabling a target resolution of about 250 ppm dissolved gas detection. 
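The hydrogen-sensing arithmetic above can be checked with a short calculation; the sketch below uses only the figures quoted above (about 7 pm of shift per 1% volume H2 and a 50 fm detection-unit resolution) and is purely illustrative.

```python
# Worked check of the hydrogen-sensing figures quoted above. The ideal
# (thermally uncompensated) resolution is the detector resolution divided
# by the coating sensitivity.

sensitivity_pm_per_pct = 7.0     # pm of wavelength shift per 1% volume H2
detector_resolution_pm = 0.050   # 50 femtometers expressed in picometers

h2_resolution_pct = detector_resolution_pm / sensitivity_pm_per_pct
print(f"ideal H2 resolution: {h2_resolution_pct:.4f}% volume")  # ~0.007%
# Thermal effects degrade this figure, which is consistent with the
# 0.01-0.02% H2 resolution stated above for free air.
```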
In some embodiments, the monitoring system disclosed herein can be used for detecting partial discharge of a transformer. A partial discharge causes small electrical sparks to be present in an insulator as a result of the electrical breakdown of a gas (for example air) contained within a void or in a highly non-uniform electric field. The sudden release of energy caused when a partial discharge occurs produces a number of effects, such as chemical and structural changes in the materials surrounding the partial discharge location, electromagnetic signal generation and/or acoustic emission, e.g., in the 50-200 kHz frequency range. With the high frequency monitoring capability enabled by the approaches discussed herein, acoustic emission detection of fast (up to 1 MHz) dynamic strain signals (up to 1.45 fm/√GHz) from partial discharge acoustic emission may be achieved and used to detect the occurrence of and/or the severity of the partial discharge. In the embodiment shown inFIG.3, the sensors S1, S2, . . . SN are disposed on a single optical fiber330that is partially embedded within a transformer301. Each of the sensors S1, S2, . . . SN may operate within a different wavelength band from other sensors on the optical fiber330. For example, sensor S1may operate within a first wavelength band centered at wavelength λ1, sensor S2may operate within a second wavelength band centered at λ2, and sensor SN may operate within an Nth wavelength band centered at λN. Each wavelength band λ1, λ2, . . . λNmay be selected so that it does not substantially overlap with the wavelength bands of the other sensors. The monitoring system300includes control circuitry335comprising an input light source310, optical demultiplexer340, and detection unit350. In some embodiments, the control circuitry includes an analyzer360implementing model-based algorithms362. Optical sensors S1, S2, . . . SN are optically coupled to the input light source310, which may be a broadband light source that supplies input excitation light across a broad wavelength band that spans the operating wavelength bands of the optical sensors S1, S2, . . . SN. Output light from optical sensors S1, S2, . . . SN is carried on optical fiber330to a wavelength domain optical demultiplexer340that spatially disperses light from the optical fiber330according to the wavelength of the light. In various implementations, the optical demultiplexer may comprise a linearly variable transmission structure and/or an arrayed waveguide grating, or other optically dispersive element. In configurations that include multiple transformers, the optical signals from each of the transformer monitors (which may each include sensors S1through SN) can be coupled through an optical time multiplexer (not shown inFIG.3) to the optical demultiplexer340. The use of optical time multiplexers is discussed in greater detail below. Light from the demultiplexer340is optically coupled to a detection unit350which may comprise one or more photodetectors. Each photodetector is configured to generate an electrical signal in response to light that falls on a light sensitive surface of the photodetector. The electrical signals generated by the photodetectors of the detection unit350are representative of the parameters sensed by sensors S1, S2, . . . SN. The optical demultiplexer340used in conjunction with the detection unit350allows the sensor signal from each of the sensors S1, S2, . . . SN to be individually detected. The electrical signals generated by the detection unit350can be used by the analyzer360to analyze (predict, detect and/or diagnose) one or more of a functional condition, a state, and/or a degradation condition of the power transformer301based on analysis of the electrical signals. Examples of a state of a power transformer can include the load level of the transformer or the temperature of the transformer. Examples of a functional condition include the actual age of the transformer, expected time of service based on expected load levels, present load capacity, etc. Examples of a degradation condition include short circuit, excessive dissolved gases, partial discharge events, corrosion, etc. Predicting a state or condition is used herein to express making an estimate that the state or condition will happen at a future time. Prediction may involve an estimate of the future time that the state or condition is expected to occur. Detecting a state or condition involves detecting that the state or condition is currently present or absent. Diagnosing a state or condition may identify the degree to which the state or condition is present and/or may identify the cause or causes of the state or condition. In some embodiments, the analysis can be used to schedule maintenance and/or to control operation of the power transformer and/or other components of the power grid. 
The sensed parameters, as represented by the electrical signals from the sensors, can be used in conjunction with theoretical and/or empirical transformer models and model-based algorithms362for real-time estimation of the transformer state, various degradation conditions and/or various functional conditions, for example. The models can be adapted based on detected conditions of the transformer, measures of internal and/or external parameters and/or correlations between the operational conditions and measured parameters. The availability of real-time transformer state variables through the disclosed monitoring system can significantly alleviate many of the problems with grid asset monitoring and grid distribution management. The model-based algorithms can correlate sensed parameter values and/or trends with transformer degradation conditions. As one example, consider dissolved gas concentration, which can be correlated to safety-critical and performance effects that occur due to degradation in the oil and insulation caused by high temperatures and/or other aging factors. Gas evolution is exacerbated in the presence of other transformer faults such as partial discharges. Thus, dissolved gas levels are reflective of long-term changes in the transformer health due to high temperatures (ambient or from high load operation), cycling under variable distributed energy resource loads, and storage. The monitoring system disclosed herein can provide information about transformer degradation based on dissolved gas sensing. The algorithms executed by the analyzer may take into account trends of dissolved gas sensing as well as temperature and/or cycling trends to make predictions about a future degradation state of the transformer and/or the rate of transformer degradation. As an additional example, consider another parameter of potential interest, coil strain. Coil strain can be separated into two factors: (a) ohmic and hysteresis-related heating leading to thermal expansion, and (b) magnetostrictive elastic (magnetoelastic) deformation induced by the load level within the core. Because thermal expansion is a slower process than magnetoelastic deformation from the core expansion cycles, mechanical equilibrium is established much faster than thermal equilibrium. The thermal strain can be isolated from the magnetoelastic deformation using a tubing, for example, as mentioned earlier. As an alternative implementation, core thermal expansion can be modeled. Heat generated by hysteresis losses and electrical resistance in windings produces repetitive thermal expansion and contraction of the materials. The optically sensed temperature may be used as an input to the thermal strain model to determine the temperature induced strain. This value can be subtracted from the total strain to isolate magnetoelastic strain (a minimal form of this subtraction is sketched following this paragraph). Isolation of the thermal strain can allow the residual magnetoelastic strain to act as a snapshot of the load level of the transformer. Core in-plane strain values in the range of about 5-50με can be expected based on typical results from numerical simulations. With higher distributed energy resource penetration leading to more variable loading conditions, the response behavior of the coil strain under inrush currents can be used to predict the transformer's ability to function reliably under a range of variable DER scenarios, including two-way flows from high levels of distributed generation. 
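A minimal sketch of the strain-separation step described above follows; the linear thermal model and its coefficient are illustrative assumptions rather than values from this disclosure.

```python
# Illustrative sketch: a modeled thermal strain (driven by the optically
# sensed temperature) is subtracted from the total sensed strain to isolate
# the magnetoelastic component. The linear model and coefficient are
# assumptions for illustration only.

def thermal_strain(delta_t_kelvin, cte=12e-6):
    """Linear thermal-expansion model: strain = CTE * delta-T.
    cte is an assumed effective coefficient of thermal expansion (1/K)."""
    return cte * delta_t_kelvin

def magnetoelastic_strain(total_strain, delta_t_kelvin):
    """Residual (load-related) strain after removing modeled thermal strain."""
    return total_strain - thermal_strain(delta_t_kelvin)

# Example: 40 microstrain total at a 2 K core temperature rise.
residual = magnetoelastic_strain(total_strain=40e-6, delta_t_kelvin=2.0)
# residual (~16 microstrain here) acts as a snapshot of the load level,
# within the ~5-50 microstrain range noted above.
```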
Inelastic strain behavior, acoustic emission, vibrations, and/or dynamic oscillations may be generated during partial discharge or coil short circuit events. Partial discharge and short circuits can be detected based on sensing inelastic strain, acoustic emission, vibrations, and/or dynamic oscillations. Unusual vibrations can also result from core structural issues. Thus, parameters such as coil strain and/or vibration, which change with loads, can correlate to loading on the transformer while dynamic events offer incipient failure indications. It is possible for mechanical stresses originating from the grid (e.g. higher harmonics in loads) or the operating environment (e.g. seismic events or neighboring construction activity) to be transmitted to the transformer core through the transformer mounts. These stresses might induce additional strains and sensor readings that are not accounted for by the model and confound the parameters sensed by the sensors. A control optical strain sensor can be placed on the transformer enclosure. The output of the control sensor can be used to compensate the sensed parameter signals of interest for strain from external sources. Optically sensing changes in magnetoelasticity, dissolved gas evolution, incidence of partial discharge events and/or other parameters, such as those discussed herein, and trending the parameters over time can give useful metrics for transformer health and prognosis. For example, present values of one or more parameters and/or the rates of change of trends of the one or more parameters can be compared to threshold present values and/or trend values (e.g., slopes) as an indication of transformer health and/or to predict the likelihood of a degradation state and/or safety event, such as a transformer coil short circuit (a minimal form of this comparison is sketched below). A probabilistic regression analysis, such as relevance vector machines, can be applied in a machine learning approach to develop the models employed by the model-based algorithms for the detection, prediction, and/or diagnosis of the transformer operational state. The machine learning algorithms can collect data under laboratory training conditions and/or from conditions experienced by transformers deployed in the field. The machine learning algorithms employed may use probabilistic kernels to reject the effects of outliers and the varying number of data points under different operational conditions that can bias conventional curve fitting methods. The probabilistic techniques can also leverage Bayesian learning to manage system uncertainty. The models and/or model-based algorithms may be adapted over time through continued machine learning. A variety of filtering techniques are applicable here. Efficient non-linear filters that combine Bayesian learning with importance sampling to provide good state-tracking performance are suitable for this task. The model-based algorithms that are tuned during the tracking phase can then be propagated for expected loads to give short or long-term prognosis for the transformer. 
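The threshold comparison of present values and trend slopes described above can be sketched as follows; the function names, sample readings, and limit values are illustrative assumptions, not parameters from this disclosure.

```python
# Illustrative sketch: the present value and trend (least-squares slope) of
# a sensed parameter are each checked against assumed limits as a simple
# health indication.

def linear_slope(times, values):
    """Least-squares slope of a parameter trend (value units per unit time)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

def health_flags(times, values, value_limit, slope_limit):
    """Return (value_exceeded, trend_exceeded) indications for one parameter."""
    return values[-1] > value_limit, linear_slope(times, values) > slope_limit

# Example: dissolved-gas readings (ppm) sampled daily; limits are assumed.
days = [0, 1, 2, 3, 4]
ppm = [180, 190, 205, 225, 250]
print(health_flags(days, ppm, value_limit=250.0, slope_limit=10.0))
# -> (False, True): the level is still in range but the trend is abnormal.
```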
In some scenarios, information acquired or developed by the analyzer360may be provided to an operator via an electronic or printed report. For example, the analyzer360may compile, analyze, trend, and/or summarize the sensed parameters, and/or may perform other processes using the sensed parameters as input, such as predicting and/or diagnosing the state of the transformer301. The results of these analyses and/or other information derived from monitoring the transformer301may be provided in a report that can be displayed graphically, textually and/or in any convenient form to an operator and/or may be provided to another computer system for storage in a database and/or further analysis and/or to update the predictive models and/or model-based algorithms. In some configurations, the information derived from the transformer monitoring can be provided to the operator of the power grid through a graphical user interface that includes a dashboard361presented on a display. The display dashboard allows for accessing and configuring reports and/or graphs regarding the status of individual transformers, multiple transformers and/or other grid components. In some embodiments one or more of the optical demultiplexer, detection unit and analyzer can be implemented as an integrated component at a substation which is interoperable with substation automation systems (SAS). The integrated component can handle one or more multiplexed embedded optical sensors within one or more power transformers. Optical sensor-based sensing as illustrated inFIG.3allows for incorporating multiple sensing elements, e.g., about 8 sensors, on a single optical fiber. In some approaches, each of the sensors S1, S2, . . . SN can be individually interrogated through wavelength domain multiplexing and demultiplexing. In some approaches, as illustrated below, sensors disposed in multiple sensor modules can be individually interrogated through a combination of time domain multiplexing and wavelength domain multiplexing and demultiplexing. In some implementations, both ends of the sensor waveguide330disposed within a transformer may be optically coupled to the light source310and the optical demultiplexer340through optical switches (not shown inFIG.3). Coupling both ends of the optical fiber may be useful in the event of a broken optical fiber. For example, consider the scenario wherein the optical fiber330breaks into two portions between sensors S1and S2, but both ends of the optical fiber330are connected to the light source310and optical demultiplexer340via optical switches. In this example, an optical fiber initially included all the sensors S1through SN, but after the breakage, sensors S1through SN can be considered to be disposed on two FO cables. Even with the broken optical fiber, all sensors S1through SN remain accessible through the two portions of the optical fiber if both ends of the optical fiber are selectably optically coupled to the light source310and optical demultiplexer340through an optical switch. The sensors on each portion of the broken optical fiber are accessible by time multiplexing the signal from the optical fiber portions. In the scenario outlined above, the signal from sensor S1would be accessible through a first portion of the broken optical fiber when the optical switches are in the first state and the signals from sensors S2through SN would be accessible through the second portion of the broken optical fiber when the optical switches are in the second state. In some embodiments the analyzer360may be capable of detecting that an optical fiber is broken, e.g., based on an absence of a signal at the wavelengths of the inaccessible sensors. If the analyzer detects a broken optical fiber, the analyzer may initiate monitoring of all sensors of the optical fiber through both portions of the broken optical fiber. 
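A minimal sketch of the broken-fiber detection and recovery logic described above follows; the read-out interface shown is a hypothetical placeholder for illustration, not an actual API of the disclosed system.

```python
# Illustrative sketch: a break is inferred from missing sensor wavelengths,
# after which interrogation alternates between the two fiber ends (via the
# optical switches) and the results are merged. All interfaces here are
# hypothetical placeholders.

def missing_sensors(expected_wavelengths, detected_wavelengths, tol_nm=0.2):
    """Sensors whose reflected wavelengths were not seen within tol_nm."""
    return [w for w in expected_wavelengths
            if not any(abs(w - d) <= tol_nm for d in detected_wavelengths)]

def interrogate(expected_wavelengths, read_end):
    """read_end(end) -> detected wavelengths for fiber end 'A' or 'B'.
    Time-multiplexes both ends if a break is suspected."""
    seen = read_end("A")
    if not missing_sensors(expected_wavelengths, seen):
        return seen                                    # fiber intact
    return sorted(set(seen) | set(read_end("B")))      # read both portions

# Example with a stub read-out: end 'A' sees only S1; end 'B' sees S2, S3.
readings = {"A": [1550.0], "B": [1552.4, 1554.8]}
print(interrogate([1550.0, 1552.4, 1554.8], lambda end: readings[end]))
# -> [1550.0, 1552.4, 1554.8]: all sensors recovered despite the break.
```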
Coupling both ends of the optical fiber may be useful in the implementation wherein only one sensor is disposed on the optical fiber. For example, consider the scenario wherein the optical fiber only includes S1. If the optical fiber breaks between S1and the light source and optical demultiplexer, then S1would be inaccessible unless both ends of the FO cable are optically coupled to the light source and optical demultiplexer as discussed above. Turning now toFIG.4, the operation of a monitoring system that monitors multiple parameters of a transformer with sensor outputs multiplexed using optical wavelength division multiplexing and demultiplexing is illustrated. Broadband light is transmitted by the light source410, which may comprise or be a light emitting diode (LED) or superluminescent laser diode (SLD), for example. The spectral characteristic (intensity vs. wavelength) of the broadband light is shown by inset graph491. The light is transmitted via the optical fiber411to the first FBG sensor421. The first FBG sensor421reflects a portion of the light in a first wavelength band having a central or peak wavelength, λ1. Light having wavelengths other than the first wavelength band is transmitted through the first FBG sensor421to the second FBG sensor422. The spectral characteristic of the light transmitted to the second FBG sensor422is shown in inset graph492and exhibits a notch at the first wavelength band centered at λ1indicating that light in this wavelength band is reflected by the first sensor421. The second FBG sensor422reflects a portion of the light in a second wavelength band having a central or peak wavelength, λ2. Light that is not reflected by the second FBG sensor422is transmitted through the second FBG sensor422to the third FBG sensor423. The spectral characteristic of the light transmitted to the third FBG sensor423is shown in inset graph493and includes notches centered at λ1and λ2. The third FBG sensor423reflects a portion of the light in a third wavelength band having a central or peak wavelength, λ3. Light that is not reflected by the third FBG sensor423is transmitted through the third FBG sensor423. The spectral characteristic of the light transmitted through the third FBG sensor423is shown in inset graph494and includes notches centered at λ1, λ2, and λ3. Light in wavelength bands481,482,483, having central wavelengths λ1, λ2and λ3(illustrated in inset graph495) is reflected by the first, second, or third FBG sensors421,422,423, respectively, along the FO cables412to the analyzer430. The analyzer430may compare the shifts in each of the central wavelengths λ1, λ2and λ3and/or wavelength bands reflected by the sensors421-423to a characteristic base wavelength (a known wavelength) to determine whether changes in the parameters sensed by the sensors421-423have occurred. The analyzer430may determine that one or more of the sensed parameters have changed based on the wavelength analysis and may calculate a relative or absolute measurement of the change. In some cases, instead of emitting broadband light, the light source may scan through a wavelength range, emitting light in narrow wavelength bands to which the various sensors disposed on the FO cable are sensitive. The reflected light is sensed during a number of sensing periods that are timed relative to the emission of the narrowband light. For example, consider the scenario where sensors1,2, and3are disposed on a FO cable. 
Sensor1is sensitive to a wavelength band (WB1), sensor2is sensitive to wavelength band WB2, and sensor3is sensitive to WB3. The light source may be controlled to emit light having WB1during time period1and sense reflected light during a time period1athat overlaps time period1. Following time period1a, the light source may emit light having WB2during time period2and sense reflected light during time period2athat overlaps time period2. Following time period2a, the light source may emit light having WB3during time period3and sense reflected light during time period3athat overlaps time period3. Using this version of time division multiplexing (TDM), each of the sensors may be interrogated during discrete time periods. The FO cable used for energy storage/power system monitoring may comprise a single mode (SM) FO cable or may comprise a multi-mode (MM) FO cable. While single mode fiber optic cables offer signals that are easier to interpret, to achieve broader applicability and lower costs of fabrication, multi-mode fibers may be used. A major challenge of FBG and other wavelength-based FO sensors is that the obtained wavelength shifts are typically very small. Sub-picometer wavelength measurement resolution is key to achieving high sensitivity. At the same time, it is desirable to maintain this capability over a wide spectral range. Additionally, high-speed detection enables monitoring of higher frequency vibration/acoustic signals. The detection units described herein use wavelength shift detectors that can resolve wavelength shifts as small as 50 femtometers, for example. In some embodiments, the detector unit comprises position-sensitive photodetectors and the optical demultiplexer comprises a detector coating that has laterally varying transmission properties, i.e., a laterally varying transmission structure (LVTS). The coating converts the wavelength information of the incident light into a spatial intensity distribution, which can be detected with high precision with a position-sensitive photodetector. Differential read-out of the photodetector allows the determination of the centroid of the light distribution. The approach used by the optical demultiplexer and detection unit converts wavelength shifts into a simple centroid detection scheme, allowing for higher-resolution wavelength shift detection and a higher cutoff frequency for monitoring optical signals. As described in more detail in conjunction withFIG.5andFIG.6, in some embodiments, the output light from the monitor is routed through a linear variable filter, which serves as the optical wavelength demultiplexer. Only wavelengths within a particular range are transmitted and collected by one or more photodetectors of the detection unit. The difference of the sensor signals renders the signal independent of the strength of the light source. This makes it relatively robust to noise source fluctuations. As a result, the output voltage is proportional to the spatial distribution of the light. FIG.5is a block diagram illustrating portions of the control circuitry500of a transformer monitoring system that may be used to detect and/or interpret optical signals received from an MM or SM FO cable having multiple optical sensors arranged at locations in, on or about a power transformer. The light source505provides input excitation light to the sensors via optical fiber506. The control circuitry500includes various components that may optionally be used to detect a shift in the wavelength of light reflected by the sensors and propagated by optical fiber510. 
The control circuitry500optionally includes a spreading component540configured to collimate and/or spread the light from the optical fiber510across an input surface of LVTS530. In arrangements where sufficient spreading of the light occurs from the optical fiber, the spreading component may not be used. The LVTS530may comprise a dispersive element, such as a prism, or linear variable filter. The LVTS530receives light at its input surface531(from the optical fiber510and (optionally) the spreading component540) and transmits light from its output surface532. At the output surface532of the LVTS530, the wavelength of the light varies with distance along the output surface532. Thus, the LVTS530serves to demultiplex the optical signal incident at the input surface531of the LVTS530according to the wavelength of the light. FIG.5shows two wavelength bands (called emission bands) emitted from the LVTS530: a first emission band has a central wavelength of λaand is emitted at distance dafrom a reference position (REF) along the output surface532. The second emission band has a central wavelength λband is emitted at distance dbfrom the reference position. A position sensitive detector (PSD)550is positioned relative to the LVTS530so that light transmitted through the LVTS530falls on the PSD. For example, light having wavelength λafalls on region a of the PSD550and light having wavelength λbfalls on region b of the PSD550. The PSD generates an electrical signal along output551that includes information about the position (and thus the wavelength) of the light output from the LVTS. The output signal from the PSD is used by the analyzer560to detect shifts in the wavelengths reflected by the sensors. The PSD may be or comprise a non-pixelated detector, such as a large area photodiode, or a pixelated detector, such as a photodiode array or charge coupled detector (CCD). Pixelated one-dimensional detectors include a line of photosensitive elements whereas a two-dimensional pixelated detector includes an n×k array of photosensitive elements. Where a pixelated detector is used, each photosensitive element, corresponding to a pixel, can generate an electrical output signal that indicates an amount of light incident on the element. The analyzer560may be configured to scan through the output signals to determine the location and location changes of the transmitted light spot. Knowing the properties of the LVTS allows determining the peak wavelength(s) and the shift of the peak wavelength(s) of the first and/or second emission band. The wavelength shift of the first or second emission band can be detected as a shift of the transmitted light spot at location a or b. This can, for example, be accomplished by determining the normalized differential current signal of certain pixels or pixel groups of the PSD. For example, consider the case where light spot A having emission band EBAis incident on the PSD at location a. Ia1is the current generated in the PSD by light spot A by pixel/pixel group at location a1and Ia2is the current generated in the PSD by light spot A by pixel/pixel group at location a2. Light spot B having emission band EBBis incident on the PSD at location b. Ib1is the current generated in the PSD by light spot B by pixel/pixel group at location b1and Ib2is the current generated in the PSD by light spot B by pixel/pixel group at location b2. 
The normalized differential current signal generated by pixels or pixel groups at locations a1and a2can be written (Ia1−Ia2)/(Ia1+Ia2), which indicates the position of light spot A on the PSD. The wavelength of EBAcan be determined from the position of light spot A on the PSD. Similarly, the normalized differential current signal generated by pixels or pixel groups at locations b1and b2can be written (Ib1−Ib2)/(Ib1+Ib2), which indicates the position of light spot B on the PSD. The wavelength of EBBcan be determined from the position of light spot B on the PSD. FIG.6is a block diagram illustrating portions of the control circuitry600of a monitoring system that includes a non-pixelated, one-dimensional PSD650. The control circuitry600includes an optional spreading component640that is similar to spreading component540as previously discussed. The spreading component640is configured to collimate and/or spread the light from the optical fiber610across an input surface631of the LVTS630. In the implementation depicted inFIG.6, the LVTS630comprises a linear variable filter (LVF) that includes layers deposited on the PSD650to form an integrated structure. The LVF630in the illustrated example comprises two mirrors, e.g., distributed Bragg reflectors (DBRs)633,634that are spaced apart from one another to form optical cavity635. The DBRs633,634may be formed, for example, using alternating layers of high refractive index contrast dielectric materials, such as SiO2and TiO2. One of the DBRs633is tilted with respect to the other DBR634forming an inhomogeneous optical cavity635. It will be appreciated that the LVF may alternatively use a homogeneous optical cavity when the light is incident on the input surface at an angle. The PSD650shown inFIG.6is representative of a non-pixelated, one-dimensional PSD although two-dimensional, non-pixelated PSDs (and one or two-dimensional pixelated PSDs) are also possible. The PSD650may comprise, for example, a large area photodiode comprising a semiconductor such as InGaAs. Two contacts653,654are arranged to run along first and second edges of the semiconductor of the PSD to collect current generated by light incident on the surface of the PSD650. When a light spot699is incident on the PSD650, the contact nearest the light spot collects more current when compared to the contact farther from the light spot which collects a lesser amount of current. The current from the first contact653is denoted I1and the current from the second contact654is denoted I2. The analyzer660is configured to determine the normalized differential current, (I1−I2)/(I1+I2), from which the position of the transmitted light spot, and therefore the predominant wavelength of the light incident at the input surface631of the LVTS630, can be determined. The predominant wavelength may be compared to known wavelengths to determine an amount of shift in the wavelength. The shift in the wavelength can be correlated to a change in the sensed parameter. If two emission bands (creating two spatially separated light spots) hit the detector at the same time, the detector is only capable of providing an average wavelength and wavelength shift for the two emission bands. If the wavelengths and wavelength shifts of both emission bands need to be determined separately, the two emission bands need to hit the detector at different times (time multiplexing). 
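A minimal sketch of the normalized differential read-out described above follows, assuming an idealized linear position-to-wavelength calibration for the LVTS; the numeric currents and calibration constants are illustrative assumptions.

```python
# Illustrative sketch of the one-dimensional, non-pixelated PSD read-out:
# the normalized differential current gives the light-spot position, and an
# assumed linear LVF dispersion maps position to wavelength.

def spot_position(i1, i2, half_length_mm):
    """Light-spot position from the edge currents I1 and I2. The normalized
    differential current (I1 - I2)/(I1 + I2) is independent of the overall
    light intensity."""
    return half_length_mm * (i1 - i2) / (i1 + i2)

def wavelength_at(position_mm, lambda_center_nm, nm_per_mm):
    """Map spot position to wavelength using an assumed linear calibration."""
    return lambda_center_nm + nm_per_mm * position_mm

pos = spot_position(i1=1.10e-6, i2=0.90e-6, half_length_mm=5.0)  # currents in A
lam = wavelength_at(pos, lambda_center_nm=1550.0, nm_per_mm=2.0)
# pos = 0.5 mm, lam = 1551.0 nm. Doubling both currents (a brighter source)
# leaves pos and lam unchanged, illustrating the robustness noted above.
```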
In other embodiments, a two dimensional non-pixelated PSD may be used, with edge contacts running along all four edges. The position of the central reflected wavelength may be determined by analyzing the current collected from each of the four contacts. The control circuitry (see element335ofFIG.3) is also referred to as a "read-out" and may be packaged with an onboard excitation light source as a photonic integrated circuit chip with a chip size between 30-60 mm2, which can be disposed in a suitable housing, e.g., a TO5 transistor package. For example, a mass-production version of the control circuitry with an on-board light source may fit within a typical integrated optics module having a volume as small as about 7.5 in3and/or with a weight of less than about 0.1 lbs. In some embodiments, the wavelength division demultiplexer (see element212inFIG.2) may comprise an arrayed waveguide grating (AWG) as shown in the monitoring system700ofFIG.7.FIG.7illustrates a power transformer770having a number of optical sensors, S1, S2, . . . SN, disposed within, on, or about the power transformer770. Although only one transformer is shown inFIG.7, it will be appreciated that a monitoring system may include multiple transformers which are monitored by multiple sensors. Referring toFIG.7, S1operates in a wavelength band having peak, center, or centroid wavelength λ1, S2operates in a wavelength band having peak, center, or centroid wavelength λ2, and SN operates in a wavelength band having center wavelength λN. Each sensor may be most sensitive to a different parameter, such that S1is most sensitive to parameter1, S2is most sensitive to parameter2, and SN is most sensitive to parameter N. A change in parameter1may shift the wavelength of the light reflected from S1from λ1to (λ1+/−Δ1), a change in parameter2may shift the wavelength of light reflected from S2from λ2to (λ2+/−Δ2), etc. The wavelength shifts caused by changes in the sensed parameters are small compared to the spacing between the characteristic base wavelengths of the individual sensors. Light source710is configured to provide input light to the sensors through circulator715. The light source710has a bandwidth broad enough to provide input light for each of the sensors and over the range of reflected wavelengths expected. The AWG may include N pairs of output waveguides745, wherein each pair of output waveguides745is centered in wavelength around the reflection output of a particular sensor. Light from the light source travels through the circulator and reflects off the sensors as output light. The output light emanating from the sensors is carried on sensor optical waveguide730through circulator715to the AWG740which is used as the optical wavelength domain demultiplexer. When used as an optical demultiplexer, light from the AWG input waveguide741is dispersed via diffraction to output waveguides745depending on the wavelength of the light. For example, an AWG might have a center wavelength of 1550 nm, and 16 output channels with a channel spacing of 100 GHz (0.8 nm at that wavelength). In this scenario, light input at 1549.6 nm will go to channel 8, and light input at 1550.4 nm will go to channel 9, etc. An AWG may include an input waveguide741, a first slab waveguide742, array waveguides743, a second slab waveguide744, and output waveguides745. Each of the array waveguides743is incrementally longer than the next. The input light is broken up in the first slab waveguide742among the array waveguides743. At the output of each array waveguide743, the light has accrued a wavelength-dependent phase shift, which also is incrementally more from one waveguide to the next. 
The outputs of the array waveguides743resemble an array of coherent sources. Therefore, the propagation direction of the light emitted from the array waveguides743into the second slab waveguide744depends on the incremental phase shift between the sources and hence the wavelength, as in a diffraction grating. In some embodiments, the optical coupler, e.g., AWG, the photodiode array and/or the digitizer may be arranged as a planar lightwave circuit, i.e., an integrated optical device. For example, these system components may be made from silicon-on-insulator (SOI) wafers using optical and/or electron beam lithography techniques. The planar lightwave circuit can be coupled to the fiber optic, aligned using V-grooves anisotropically etched into the silicon. Hybrid integration with other semiconductors, for example germanium, is possible to provide photodetection at energies below the bandgap of silicon. In the AWG740, the outputs of the array waveguides743(and hence the input side of the slab waveguide744) may be arranged along an arc with a given radius of curvature such that the light emanating from them travels in the second slab waveguide744and comes to a focus a finite distance away. The inputs of the output waveguides745are nominally disposed at the focal points corresponding to specific wavelengths, although they may be set either in front of or behind the foci to deliberately introduce "crosstalk" between the output waveguides as will be described later. Therefore, light at the input741of the AWG740is passively routed to a given one of the output waveguides745depending on the wavelength of the light. Thus, the output light from the sensors S1, S2, . . . SN is routed to output waveguides745depending on the wavelength of the output light. The output waveguides745are optically coupled to a detector unit750that includes photodetectors, e.g.,2N photodetectors. Due to the wavelength-based spatial dispersion in the AWG, the output light from the sensors S1, S2, . . . SN is spatially distributed across the surface of the detector unit. The photodetectors sense the light from the output waveguides and generate electrical signals that include information about the sensed parameters. FIG.8Aillustrates in more detail the output waveguides of an AWG used as a wavelength domain optical demultiplexer (e.g. element340ofFIG.3) and a detector unit (e.g., element350ofFIG.3) according to some embodiments. In the illustrated configuration2N photodetectors are respectively coupled to receive light from N sensors. The AWG spatially disperses sensor output light having centroid wavelengths λ1, λ2, . . . λNto the output waveguide pairs845a,b,846a,b, . . .847a,b. Sensor output light having centroid wavelength λ1is dispersed to waveguide pair845a,845b; sensor output light having centroid wavelength λ2is dispersed to waveguide pair846a,846b; sensor output light having centroid wavelength λNis dispersed to waveguide pair847a,847b, etc. 
Light from output waveguide845ais optically coupled to photodetector855awhich generates signal I11in response to the detected light; light from output waveguide845bis optically coupled to photodetector855bwhich generates signal I12in response to the detected light; light from output waveguide846ais optically coupled to photodetector856awhich generates signal I21in response to the detected light; light from output waveguide846bis optically coupled to photodetector856bwhich generates signal I22in response to the detected light; light from output waveguide847ais optically coupled to photodetector857awhich generates signal IN1in response to the detected light; light from output waveguide847bis optically coupled to photodetector857bwhich generates signal IN2in response to the detected light. As the centroid of a sensor's output light shifts in response to the sensed parameter, the AWG causes the spatial position of the sensor's output light to also shift. For example, if sensor output light that initially has a centroid at λ1shifts to a centroid at λ1+Δ1, as shown inFIG.8A, the amount of light carried by output waveguide845adecreases and the amount of light carried by output waveguide845bincreases. Thus, the amount of light detected by photodetector855adecreases and the amount of light detected by photodetector855bincreases with corresponding changes in the photocurrents I11and I12. Thus, a shift in the sensed parameter causes a shift in the sensor output light centroid from λ1to λ1+Δ1, which in turn causes a change in the ratio of I11to I12. The photocurrent of each photodiode may be converted into a voltage with a resistor or transimpedance amplifier, and sensed and digitized. The wavelength shift may be calculated for the ith FBG with the following formula: λi≈λi0+(Δλ/2)·(I2i−I2i−1)/(I2i+I2i−1) Here, λiis the estimated wavelength of the ith FBG, λi0is the center wavelength of an output waveguide pair, Δλ is the wavelength spacing between the peak transmission wavelengths of an output waveguide pair, and I2iand I2i−1are the light intensities recorded by the photodetectors at the output of each waveguide in the pair. From the sensed wavelength shift of a given FBG, it is possible to calculate values of sensed parameters, and in turn, to calculate properties of the transformer or other power grid component corresponding to the parameters sensed by the FBG if it is known how those properties tend to vary the observed wavelength shift. In some embodiments, the FBGs have a FWHM roughly equal to Δλ/2, such that as the reflected peak from the FBG shifts from one photodetector in the pair to the other, there is a continuous and monotonic change in the differential signal of the pair (numerator in the formula above). 
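A minimal sketch of the waveguide-pair wavelength estimate given above follows; the photocurrents and pair parameters used in the example are illustrative values only.

```python
# Illustrative sketch of the pair formula above: the wavelength of the i-th
# FBG is recovered from the two photocurrents of its AWG output waveguide
# pair, lambda_i ~ lambda_i0 + (dl/2)*(I_2i - I_2i-1)/(I_2i + I_2i-1).

def fbg_wavelength(i_a, i_b, lambda_pair_center_nm, pair_spacing_nm):
    """i_a and i_b are the photocurrents I_2i and I_2i-1 of the pair;
    lambda_pair_center_nm is the pair's center wavelength (lambda_i0) and
    pair_spacing_nm is the spacing between its peak transmission wavelengths."""
    return (lambda_pair_center_nm
            + (pair_spacing_nm / 2.0) * (i_a - i_b) / (i_a + i_b))

# Example: a pair centered at 1550.0 nm with 0.8 nm peak-to-peak spacing.
est = fbg_wavelength(i_a=0.60e-6, i_b=0.40e-6,
                     lambda_pair_center_nm=1550.0, pair_spacing_nm=0.8)
# est = 1550.08 nm: the reflected peak has shifted toward waveguide 'a'.
```

Note that the normalization by the current sum makes the estimate insensitive to overall light-source intensity, in the same way as the PSD read-out sketched earlier.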
FIG.8Billustrates in more detail another configuration of the output waveguides of an AWG used as a wavelength domain optical demultiplexer (e.g. element212ofFIG.2) and a detection unit (e.g., element215ofFIG.2) according to some embodiments. In this configuration N photodetectors are respectively coupled to receive light from N sensors. The AWG spatially disperses sensor output light having centroid wavelengths λ1, λ2, . . . λNto the output waveguides845,846, . . .847. Sensor output light having centroid wavelength λ1is dispersed to waveguide845; sensor output light having centroid wavelength λ2is dispersed to waveguide846; sensor output light having centroid wavelength λNis dispersed to waveguide847, etc. Light from output waveguide845is optically coupled to photodetector855which generates signal I1in response to the detected light; light from output waveguide846is optically coupled to photodetector856which generates signal I2in response to the detected light; light from output waveguide847is optically coupled to photodetector857which generates signal INin response to the detected light. As the centroid of a sensor's output light shifts in response to the sensed parameter, the AWG causes the spatial position of the sensor's output light to also shift. For example, if sensor output light that initially has a centroid at λ1shifts to a centroid at λ1+Δ1as shown inFIG.8B, the amount of light carried by output waveguide845increases. Thus, the amount of light detected by photodetector855increases with a corresponding change in the photocurrent I1. Thus, a shift in the sensed parameter causes a shift in the sensor output light centroid from λ1to λ1+Δ1, which in turn causes a change in the current I1. Changes in the photodetector current that are caused by fluctuations of light source intensity (e.g.,310inFIG.3) can be differentiated from changes in photodetector current caused by wavelength shifts in sensor output light by measuring the light source intensity with an additional photodetector899that generates current IN+1. Then, a wavelength shift can be calculated from the ratio I1/IN+1for sensor1, I2/IN+1for sensor2, etc. From the sensed wavelength shift of a given sensor, it is possible to calculate a value of the sensed parameter, and in turn, to calculate properties of the transformer corresponding to the parameter sensed by the sensor if it is known how those properties tend to vary the observed wavelength shift. FIG.9illustrates in more detail the output waveguides of an AWG used as a wavelength domain optical demultiplexer, an additional dispersive element, and a digitizer according to some embodiments. In this example, the output light from sensors1,2, . . . N having initial centroid wavelengths λ1, λ2, . . . λNis respectively spatially dispersed to output waveguides945,946, . . .947of the AWG. The light from output waveguides945,946, . . .947is incident on LVTS965,966, . . .967or other spatially dispersive optical element. Optionally, the LVTS includes spreading components955,956. . .957configured to collimate and/or spread the light from the output waveguide945,946. . .947across an input surface of LVTS965,966, . . .967. In arrangements where sufficient spreading of the light occurs from the output waveguides945,946, . . .947, the spreading components may not be used. The LVTS965,966, . . .967comprises a dispersive element, such as a prism or a linear variable filter. The LVTS965,966, . . .967receives light at its input surface965a,966a, . . .967afrom the waveguide945,946, . . .947and the optional spreading component955,956, . . .957and transmits light from its output surface965b,966b, . . .967bto photodetector pairs975,976, . . .977. At the output surface965b,966b, . . .967bof the LVTS965,966, . . .967, the wavelength of the light varies with distance along the output surface. Thus, the LVTS965,966, . . .967can serve to further demultiplex the optical signal incident at the input surface965a,966a, . . .967aof the LVTS965,966, . . .967according to the wavelength of the light. FIG.9shows two wavelength bands emitted from the LVTS965: an initial emission band has a centroid wavelength of λ1and is emitted at distance d1from a reference position (REF) along the output surface965b. 
FIG.9illustrates in more detail the output waveguides of an AWG used as a wavelength domain optical demultiplexer, an additional dispersive element, and a digitizer according to some embodiments. In this example, the output light from sensors1,2. . . N having initial centroid wavelengths λ1, λ2, . . . λNis respectively spatially dispersed to output waveguides945,946, . . .947of the AWG. The light from output waveguides945,946, . . .947is incident on LVTS965,966, . . .967or other spatially dispersive optical element. Optionally, the LVTS includes spreading components955,956. . .957configured to collimate and/or spread the light from the output waveguide945,946. . .947across an input surface of LVTS965,966, . . .967. In arrangements where sufficient spreading of the light occurs from the output waveguides945,946, . . .947, the spreading components may not be used. The LVTS965,966, . . .967comprises a dispersive element, such as a prism or a linear variable filter. The LVTS965,966, . . .967receives light at its input surface965a,966a, . . .967afrom the waveguide945,946, . . .947and the optional spreading component955,956, . . .957and transmits light from its output surface965b,966b, . . .967bto photodetector pairs975,976, . . .977. At the output surface965b,966b, . . .967bof the LVTS965,966, . . .967, the wavelength of the light varies with distance along the output surface. Thus, the LVTS965,966, . . .967can serve to further demultiplex the optical signal incident at the input surface965a,966a, . . .967aof the LVTS965,966, . . .967according to the wavelength of the light. FIG.9shows two wavelength bands emitted from the LVTS965: an initial emission band has a centroid wavelength of λ1emitted at distance d1from a reference position (REF) along the output surface965b. In response to the sensed parameter, the initial wavelength band shifts to a wavelength band having centroid wavelength λ1+Δ1. The shifted wavelength band is emitted at distance dΔ1from the reference position. A photodetector pair975is positioned relative to the LVTS965so that light transmitted through the LVTS965falls on the photodetector pair975. For example, light having wavelength λ1may fall predominantly on photodetector975aand light having wavelength λ1+Δ1may fall predominantly on photodetector975b. The photodetector975agenerates signal I11in response to light falling on its light sensitive surface and photodetector975bgenerates signal I12in response to light falling on its light sensitive surface. The signals I11, I12include information about the sensed parameter such that a change in the ratio of I11and I12indicates a change in the sensed parameter, which can be calculated using the equation discussed above. The high resolution wavelength shift detection schemes discussed above can be extended to monitor tens to thousands of multiplexed sensors while maintaining 50 fm or better wavelength resolution at an effective sampling rate of 100 Hz. For example, in one embodiment the control circuitry can be configured to monitor eight wavelength multiplexed sensor strings of sixteen sensors with time domain multiplexing, e.g., using an optical switch. In such a configuration 128 sensors can be monitored at 100 Hz. At lower frequencies, up to several thousand sensors can be monitored. FIG.10shows a block diagram of a monitoring system1000that incorporates time domain multiplexing to monitor M transformers wherein each transformer monitor1021,1022, . . .1023includes N sensors. The optical outputs of the N sensors of each transformer monitor1021,1022, . . .1023may be carried on a single optical fiber1031,1032,1033where the optical outputs of the sensors are spatially distributed in wavelength by the optical demultiplexer. The optical fibers and/or sensors may be identically constructed. Input light is passed from the light source1010to the N sensors of each transformer monitor1021,1022, . . .1023through optical time domain multiplexer1070and through waveguides1031,1032, . . .1033. The input excitation light interacts with the sensors S11. . . SNM. Output light from the sensors of the transformer monitors1021,1022, . . .1023is passed to the optical wavelength domain demultiplexer1040through the optical time domain multiplexer1070. The transformer monitors1021(including sensors S11through SN1),1022(including sensors S12through SN2), . . .1023(including sensors S1M through SNM) are selected one at a time by the optical time domain multiplexer1070. Optical signals from the selected monitor are applied to the optical demultiplexer1040, detection unit1050, and analyzer1060during different time intervals. Implementations that combine time domain multiplexing and wavelength domain multiplexing and demultiplexing of sensor output light as disclosed herein are able to monitor a greater number of transformers than could be addressed by either time domain multiplexing or wavelength domain multiplexing/demultiplexing alone.
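A minimal sketch of the capacity arithmetic behind this combined scheme (the function name and rate parameter are illustrative assumptions):

```python
# Minimal sketch: with time-domain multiplexing, an optical switch
# visits each wavelength-multiplexed sensor string in turn, so every
# string is revisited once per switch cycle. Numbers follow the
# eight-strings-of-sixteen-sensors example above.
def tdm_capacity(strings: int, sensors_per_string: int,
                 per_string_rate_hz: float) -> tuple[int, float]:
    """Total sensor count, and the switch dwell time per string (s)
    needed to revisit every string at per_string_rate_hz."""
    total_sensors = strings * sensors_per_string
    dwell_time = 1.0 / (per_string_rate_hz * strings)
    return total_sensors, dwell_time

print(tdm_capacity(8, 16, 100.0))  # (128, 0.00125): 128 sensors, 1.25 ms dwell
```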
The monitoring system approaches discussed herein can include cybersecurity and interoperability as key built-in functions. For smart grid asset cybersecurity, the vulnerability of the physical, computational, and communications interface layers to deliberate attacks, as well as to inadvertent compromises from user errors, equipment failures, and natural disasters, is of concern. The disclosed approaches have an inherent advantage over conventional alternatives at least because they are based on optical fiber cables for embedded sensing. The optical fiber cable emerging from the embedded sensing configuration within the transformer is coupled to a modular, dedicated, data-secure communications bus, e.g., using standard optical fiber connectors. The communications bus can transmit the sensed signals directly to a substation control center, e.g., up to 30 km away with 50 fm resolution at 100 Hz. EMI and RFI immunity characteristics make optical fiber communications a desirable long-distance communication bus around substations. Additionally, communications over optical fiber offer shielding and lightning protection functions. According to some embodiments, the control circuitry, e.g., a photonic chip readout with an onboard light source, is located at a substation directly interfacing with the supervisory control and data acquisition (SCADA) system and the substation automation system (SAS). With its embedded sensing and model-based algorithms, the control circuitry will monitor optical sensor wavelength shifts for transformer health from the substation. Note that as previously discussed, the control circuitry could potentially monitor multiple transformers of interest and/or could monitor multiple redundant optical fiber cables from the same transformer from a central location using time multiplexing strategies. Monitoring multiple redundant optical fibers from the same transformer may be desirable from a security perspective, for example. Monitoring from a central location eliminates the need for a battery or other energy source at the sensing location. The control circuitry can be powered by the same energy source powering the automation systems in the substation control center. Monitoring from a central location also enhances security because the control circuitry can be physically protected from attack out in the field. Having additional multiplexed reference optical sensors monitoring the communication channels for unusual signal anomalies not attributable to transformer parameters can provide an alert to attacks and/or other breaches of security. Systems, devices, or methods disclosed herein may include one or more of the features, structures, methods, or combinations thereof described herein. For example, a device or method may be implemented to include one or more of the features and/or processes described herein. It is intended that such device or method need not include all of the features and/or processes described herein, but may be implemented to include selected features and/or processes that provide useful structures and/or functionality. In the above detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only, and are not intended to limit the scope of the claims. For example, embodiments described in this disclosure can be practiced throughout the disclosed numerical ranges. In addition, a number of materials are identified as suitable for various implementations. These materials are to be treated as exemplary, and are not intended to limit the scope of the claims. The foregoing description of various embodiments has been presented for the purposes of illustration and description and not limitation. The embodiments disclosed are not intended to be exhaustive or to limit the possible implementations to the embodiments disclosed.
Many modifications and variations are possible in light of the above teaching.
11860243
DESCRIPTION OF EMBODIMENTS A plug status detection solution provided in the embodiments of this application is used to detect a plug status. In one embodiment, the plug status may be a fully-connected state, a half-connected state, or an unconnected state. For example, in a GB standard, when a plug is inserted into a socket and S3 is not closed, the plug status is a half-connected state; when the plug is inserted into the socket, and S3 is closed, the plug status is a fully-connected state; and when the plug is not inserted into the socket, the plug status is an unconnected state. In addition, in the GB standard and the EU standard, for the fully-connected state, different charging cable capacities may correspond to different plug states. For example, a fully-connected state and a charging cable current-carrying capacity of 10 A correspond to one plug status, and a fully-connected state and a charging cable current-carrying capacity of 32 A correspond to another plug status. A resistance value of an RC resistor varies with the plug status. To make the objectives, technical solutions, and advantages of this application clearer, the following further describes this application in detail with reference to the accompanying drawings. An embodiment of this application provides a plug status detection circuit. As shown inFIG.4, the plug status detection circuit400includes a wake-up circuit401and a sampling circuit402. The wake-up circuit401is configured to immediately output or delay outputting a wake-up signal based on a resistance value of a connected resistor of the plug status detection circuit400when a plug is connected to a socket, where the wake-up signal is used to trigger a central processing unit (CPU) to drive the sampling circuit402. The sampling circuit402is configured to inject a startup voltage under the drive of the CPU, and after the startup voltage is injected, a level of a detection point of the sampling circuit402is used to indicate a plug status. It can be learned from the description of the background that interface configurations in the GB standard, the EU standard, and the US standard are different, and for a plug status detection circuit disposed on an electric vehicle, different interface configurations correspond to different connected resistors. For example, under the US standard, after the plug is inserted into the socket, a resistance value range of the plug status detection circuit is 100Ω to 1.5 kΩ. When a GB standard S3 is not closed, the resistance value of the connected resistor of the plug status detection circuit is 3.8 kΩ after the plug is inserted into the socket. In this embodiment of this application, resistance value ranges of different connected resistors are distinguished by immediately outputting or delaying outputting the wake-up signal. No matter whether a GB standard plug, an EU standard plug, or a US standard plug is inserted into the socket, the wake-up circuit401may be triggered to output the wake-up signal, so as to trigger the CPU to drive the sampling circuit402to inject a startup voltage, and then start plug status detection. For example, in the three scenarios of the US standard, a closed GB standard S3, and the EU standard, after the plug is inserted into the socket, the resistance value range of the plug status detection circuit400is 100Ω to 1.5 kΩ, and in this case, the wake-up circuit401can immediately output the wake-up signal. For another example, in a scenario of an unclosed GB standard S3, after the plug is inserted into the socket, the resistance value of the connected resistor of the plug status detection circuit400is about 3.8 kΩ, and in this case, the wake-up circuit401may delay outputting the wake-up signal.
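The decision the wake-up circuit401implements with analog components can be summarized by the following sketch (thresholds taken from the resistance examples above; in the actual circuit no software is involved):

```python
# Minimal sketch: whether the wake-up signal is output immediately or
# after a delay, based on the connected-resistor value when a plug is
# inserted. Thresholds follow the examples in the text.
def wakeup_behavior(connected_resistance_ohm: float | None) -> str:
    if connected_resistance_ohm is None:           # no plug inserted
        return "no wake-up signal"
    if 100 <= connected_resistance_ohm <= 1_500:   # US / EU / GB with S3 closed
        return "wake-up signal output immediately"
    if connected_resistance_ohm >= 3_700:          # GB with S3 unclosed (~3.8 kOhm)
        return "wake-up signal output after the set delay"
    return "out of expected range"

print(wakeup_behavior(1_000))   # immediate
print(wakeup_behavior(3_800))   # delayed
```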
After receiving the wake-up signal output from the wake-up circuit401, the CPU may drive the sampling circuit402, so that the startup voltage is injected into the sampling circuit402. After the startup voltage is injected, the level of the detection point of the sampling circuit402is used to indicate a plug status. The CPU may collect the level of the detection point, to further determine the plug status. In this embodiment of this application, a particular manner in which the CPU determines the plug status may be as follows: When the plug is not inserted into the socket, the wake-up circuit401does not output the wake-up signal, and the CPU determines that the plug status is an unconnected state when the wake-up signal is not received; after the plug is inserted into the socket, the wake-up circuit401immediately outputs or delays outputting the wake-up signal depending on the resistance value of the connected resistor; and after receiving the wake-up signal, the CPU drives the sampling circuit402to inject the startup voltage, and determines the plug status by detecting a voltage of the detection point. When the S3 switch status is different and the cable current-carrying capacity is different, the voltage of the detection point detected by the CPU is different. Therefore, the CPU can determine the plug status based on the voltage of the detection point.
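A minimal sketch of the CPU-side logic just described (the voltage-to-status thresholds are invented for illustration and are not values from this application):

```python
# Minimal sketch: no wake-up signal means unconnected; after a wake-up
# signal, the CPU drives the sampling circuit and reads the detection
# point, whose voltage encodes S3 state and cable current-carrying
# capacity. The threshold table below is a placeholder assumption.
def determine_plug_status(wakeup_received: bool, read_detection_point) -> str:
    if not wakeup_received:
        return "unconnected"
    v = read_detection_point()   # CPU drives sampling circuit, then samples
    if v > 4.0:
        return "half-connected (S3 open)"
    if v > 2.0:
        return "fully-connected, 10 A cable"
    return "fully-connected, 32 A cable"

print(determine_plug_status(True, lambda: 2.5))  # fully-connected, 10 A cable
```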
It is easy to learn that, in this embodiment of this application, under the US standard, the EU standard, and the GB standard, as long as the plug is inserted into the socket, the wake-up circuit401outputs the wake-up signal (immediately or after a delay). After receiving the wake-up signal, the CPU may drive the sampling circuit402, and determine the plug status by collecting the voltage of the detection point in the sampling circuit402. That is, the plug status detection circuit400provided in this embodiment of this application can be used to detect a plug status under different standards. Compared with a solution provided in the current technology, in this embodiment of this application, a plug status can be detected under different standards by using one plug status detection circuit400, which reduces a quantity of products configured on an electric vehicle side and facilitates normalized design and supply of a product. In this embodiment of this application, the sampling circuit402may include a first switching transistor and a first resistor. A first terminal of the first switching transistor is configured to input the startup voltage, and a second terminal of the first switching transistor is coupled to the CPU; and a first terminal of the first resistor is coupled to a third terminal of the first switching transistor, and a second terminal of the first resistor is coupled to the detection point, as shown inFIG.5. The first switching transistor may be an NPN type triode, an N-type metal-oxide-semiconductor (NMOS) transistor, or another device having a similar function, and the first switching transistor is switched on when a voltage of the second terminal is a particular value higher than a voltage of the third terminal. The second terminal of the first switching transistor is a control terminal, and may be, for example, a base of the NPN type triode or a gate of the NMOS. The second terminal of the first switching transistor is coupled to the CPU, and the first switching transistor is switched off or on under the control of the CPU. After receiving the wake-up signal, the CPU may send a drive signal to the first switching transistor (for example, apply a 12 V level to the second terminal of the first switching transistor), so that the voltage of the second terminal of the first switching transistor increases to meet a conduction condition of the first switching transistor. After the first switching transistor is switched on, the startup voltage (for example, the startup voltage may be 5 V) is injected into the sampling circuit402. The sampling circuit402provided in this embodiment of this application and a CC detection circuit provided in the current technology (for example, as shown inFIG.1orFIG.2) have similar functions and structures. A difference is that the first switching transistor is disposed in the sampling circuit402. After the plug is inserted into the socket, the wake-up circuit401outputs the wake-up signal, and the CPU drives the sampling circuit402through the first switching transistor after receiving the wake-up signal. In one embodiment, the first switching transistor is disposed in the sampling circuit402, so that the startup voltage is not injected into the sampling circuit402when the plug is not inserted into the socket, that is, the sampling circuit402does not work, to reduce quiescent current and system power consumption. It should be noted that in this embodiment of this application, positions of the first switching transistor and the first resistor are interchangeable (the second terminal of the first switching transistor is still coupled to the CPU), for example, the first terminal of the first resistor is configured to input the startup voltage, and the first terminal and the third terminal of the first switching transistor are respectively coupled to the second terminal of the first resistor and the detection point. The interchange of the positions of the first switching transistor and the first resistor has no substantial effect on implementation of a function of the sampling circuit402. In this embodiment of this application, the wake-up circuit401is further configured to disconnect from the detection point under the control of the CPU after outputting the wake-up signal. After the wake-up circuit401outputs the wake-up signal, a function of the wake-up circuit401is completed. In this case, the wake-up circuit401may be disconnected from the detection point, to reduce the system power consumption. In one embodiment, the wake-up circuit401may include a wake-up function enabling circuit, a delay circuit, a trigger level conversion circuit, and a wake-up signal output circuit.
The wake-up function enabling circuit is configured to communicate the detection point with the wake-up circuit401when the plug is connected to the socket; the delay circuit is configured to implement delayed conduction of a switching transistor; the trigger level conversion circuit is configured to trigger the wake-up signal output circuit to output the wake-up signal when the resistance value of the connected resistor of the plug status detection circuit400is less than a first resistance value and the switching transistor in the delay circuit is switched off, and trigger the wake-up signal output circuit to output the wake-up signal when the resistance value of the connected resistor of the plug status detection circuit400is greater than a second resistance value and the switching transistor in the delay circuit is switched on; and the wake-up signal output circuit is configured to output the wake-up signal under the trigger of the trigger level conversion circuit. The delay circuit implements delaying outputting the wake-up signal through the delayed conduction of the switching transistor. The delay circuit may include the switching transistor and a capacitor. Both terminals of the capacitor are bridged to a second terminal and a third terminal of the switching transistor. After the plug is inserted into the socket, the capacitor in the delay circuit starts charging, and the switching transistor is switched off. After the charging is performed for a period of time, a voltage difference between the second terminal and the third terminal of the switching transistor meets a switch-on condition of the switching transistor, the switching transistor is switched on, the charging of the capacitor is completed, and the capacitor is shorted. The switch-off state and the switch-on state of the switching transistor in the delay circuit respectively correspond to two connection states in the plug status detection circuit400, and the two states may match different resistance values of the connected resistor of the plug status detection circuit400. Therefore, when the plug is inserted into the socket, the wake-up circuit401may output the wake-up signal under the US standard, the EU standard, and the GB standard (a different standard corresponds to a different resistance value range of the connected resistor). For example, in the three scenarios of the US standard, the closed GB standard S3, and the EU standard, after the plug is inserted into the socket, the resistance value range of the connected resistor of the plug status detection circuit400is 100Ω to 1.5 kΩ, and when the switching transistor in the delay circuit is switched off (that is, when the capacitor starts charging), a connection status inside the plug status detection circuit400can trigger the wake-up circuit401to output the wake-up signal, that is, the wake-up signal is output immediately after the plug is inserted into the socket. For another example, in the scenario of the unclosed GB standard S3, after the plug is inserted into the socket, the resistance value of the connected resistor of the plug status detection circuit400is about 3.8 kΩ. 
When the switching transistor in the delay circuit is switched off (that is, when the capacitor is charged), the connection status inside the plug status detection circuit400does not trigger the wake-up circuit401to output the wake-up signal; and only when the switching transistor in the delay circuit is switched on (that is, when the charging of the capacitor is completed), the connection status inside the plug status detection circuit400can trigger the wake-up circuit401to output the wake-up signal, that is, the output of the wake-up signal is delayed after the plug is inserted into the socket. In addition, as described above, after outputting the wake-up signal, the wake-up circuit401may further disconnect from the detection point under the control of the CPU. In one embodiment, this function may be implemented by the wake-up function enabling circuit, and in one embodiment, the wake-up function enabling circuit is further configured to disconnect the detection point from the wake-up circuit401under the control of the CPU after the wake-up circuit401outputs the wake-up signal. Functions of the wake-up function enabling circuit, the delay circuit, the trigger level conversion circuit, and the wake-up signal output circuit are described above. Particular structures of the wake-up function enabling circuit, the delay circuit, the trigger level conversion circuit, and the wake-up signal output circuit are described below. 1. Wake-Up Function Enabling Circuit In this embodiment of this application, the wake-up function enabling circuit includes a second switching transistor, a third switching transistor, and a second resistor. In one embodiment, a first terminal of the second switching transistor is coupled to the delay circuit and the trigger level conversion circuit, a second terminal of the second switching transistor is coupled to a first terminal of the third switching transistor, and a third terminal of the second switching transistor is coupled to the detection point; a second terminal of the third switching transistor is coupled to the CPU, and a third terminal of the third switching transistor is grounded; and a first terminal of the second resistor is coupled to the first terminal of the second switching transistor, and a second terminal of the second resistor is coupled to the first terminal of the third switching transistor, as shown inFIG.6. The second switching transistor and the third switching transistor may be NPN type triodes, NMOSs, or other devices having similar functions. When a voltage of the second terminal is a particular value higher than a voltage of the third terminal, the second switching transistor is switched on, and the third switching transistor is switched on under a same condition. The second terminal of the second switching transistor and the second terminal of the third switching transistor are control terminals, and may be, for example, bases of the NPN type triodes or gates of the NMOSs. The second terminal of the third switching transistor is coupled to the CPU, and the third switching transistor is switched off or on under the control of the CPU. 
Before receiving the wake-up signal, the CPU controls the third switching transistor to be switched off (that is, sets a low level for driving the third switching transistor), and in this case, the wake-up circuit401works normally; after receiving the wake-up signal, the CPU controls the third switching transistor to switch on (that is, sets a high level for driving the third switching transistor), and in this case, the wake-up circuit401is disconnected from the detection point, and the wake-up circuit401no longer works, that is, as described above, the wake-up function enabling circuit is further configured to disconnect the detection point from the wake-up circuit401under the control of the CPU after the wake-up circuit401outputs the wake-up signal. A working principle of the wake-up function enabling circuit when the third switching transistor is switched off (that is, when the CPU does not receive the wake-up signal) is analyzed below. To facilitate analysis, the wake-up function enabling circuit and an interface configuration structure are shown in a figure, and a US standard interface is used as an example (in the following figures of this embodiment of this application, the US standard interface is also used as an example, and details are not described again later). A connection relationship of the wake-up function enabling circuit with the plug and the socket may be shown inFIG.7. After the plug is inserted into the socket, the detection point is grounded through R5. For the second switching transistor, the voltage of the third terminal is reduced, so that a voltage difference between the second terminal and the third terminal meets a switch-on condition of the second switching transistor. After the second switching transistor is switched on, the wake-up circuit401is in communication with the detection point. 2. Delay Circuit In this embodiment of this application, the delay circuit includes a third resistor, a first capacitor, a fourth switching transistor, and a second capacitor. A first terminal of the third resistor is coupled to the trigger level conversion circuit, and a second terminal of the third resistor is coupled to a first terminal of the first capacitor; a second terminal of the first capacitor is coupled to the first terminal of the second switching transistor; a first terminal of the fourth switching transistor is coupled to the first terminal of the third resistor, a second terminal of the fourth switching transistor is coupled to the second terminal of the third resistor, and a third terminal of the fourth switching transistor is coupled to the second terminal of the first capacitor; and a first terminal of the second capacitor is coupled to the second terminal of the first capacitor, and a second terminal of the second capacitor is coupled to the trigger level conversion circuit. The fourth switching transistor may be an NPN type triode, an NMOS, or another device having a similar function. When a voltage of the second terminal is a particular value higher than a voltage of the third terminal, the fourth switching transistor is switched on. The second terminal of the fourth switching transistor is a control terminal, and may be, for example, a base of the NPN type triode or a gate of the NMOS. Both terminals of the first capacitor are bridged to the second terminal and the third terminal of the fourth switching transistor.
As described above, after the plug is inserted into the socket, the first capacitor in the delay circuit starts charging, and the fourth switching transistor is switched off. After the charging is performed for a period of time, a voltage difference between the second terminal and the third terminal of the fourth switching transistor meets a switch-on condition of the fourth switching transistor, the fourth switching transistor is switched on, the charging of the first capacitor is completed, and the first capacitor is shorted. Through the delayed conduction of the fourth switching transistor, the delay circuit can implement a delayed output of the wake-up signal. Because the delay circuit works in conjunction with the trigger level conversion circuit, a particular function of the delay circuit is subsequently described together with the trigger level conversion circuit. 3. Trigger Level Conversion Circuit In this embodiment of this application, the trigger level conversion circuit includes a fourth resistor, a fifth resistor, a sixth resistor, and a fifth switching transistor. A first terminal of the fourth resistor is configured to receive an input voltage of the wake-up circuit401, and a second terminal of the fourth resistor is coupled to the second terminal of the second capacitor; a first terminal of the fifth resistor is configured to receive an input voltage (for example, the input voltage may be 12 V) of the wake-up circuit401, and a second terminal of the fifth resistor is coupled to the first terminal of the second resistor; a first terminal of the sixth resistor is coupled to the second terminal of the fourth resistor, and a second terminal of the sixth resistor is grounded; and a first terminal of the fifth switching transistor is coupled to the wake-up signal output circuit, a second terminal of the fifth switching transistor is coupled to the second terminal of the fourth resistor, and a third terminal of the fifth switching transistor is coupled to the first terminal of the second switching transistor. The fifth switching transistor may be an NPN type triode, an NMOS, or another device having a similar function. When a voltage of the second terminal is a particular value higher than a voltage of the third terminal, the fifth switching transistor is switched on. The second terminal of the fifth switching transistor is a control terminal, and may be, for example, a base of the NPN type triode or a gate of the NMOS. A schematic diagram of structures of the delay circuit and the trigger level conversion circuit may be shown inFIG.8. A working principle of the delay circuit and the trigger level conversion circuit is analyzed as follows: After the plug is inserted into the socket, the second switching transistor in the wake-up function enabling circuit is switched on, and the wake-up circuit401is in communication with the detection point. The fourth switching transistor in the delay circuit is an NPN type triode or an NMOS. Immediately after the plug is inserted, the voltage difference between the second terminal and the third terminal of the fourth switching transistor does not yet meet the switch-on condition of the fourth switching transistor, and the fourth switching transistor is switched off. The fourth resistor in the trigger level conversion circuit is connected to the circuit, and the first capacitor is charged through a path of the fifth resistor→the third resistor→the first capacitor→R5.
As the charging process of the first capacitor proceeds, the voltage difference between the second terminal and the third terminal of the fourth switching transistor becomes larger, and when the voltage difference meets the switch-on condition of the fourth switching transistor, the fourth switching transistor is switched on, the third resistor and the first capacitor are shorted, and the first capacitor stops charging. In the three scenarios of the US standard, the closed GB standard S3, and the EU standard, the resistance value of the connected resistor of the plug status detection circuit400is less than the first resistance value (for example, the resistance value may be 1.5 kΩ), and when the fourth switching transistor is switched off, the resistance value of the connected resistor can trigger the fifth switching transistor to switch on. After the fifth switching transistor is switched on, the trigger level conversion circuit can drive the wake-up signal output circuit to output the wake-up signal. In the scenario of the unclosed GB standard S3, the resistance value of the connected resistor of the plug status detection circuit400is greater than the second resistance value (for example, the resistance value may be 3.7 kΩ or 3.8 kΩ), and when the fourth switching transistor is switched off, the resistance value of the connected resistor cannot trigger the fifth switching transistor to switch on. As the charging process of the first capacitor proceeds, the voltage of the second terminal of the fourth switching transistor gradually increases until the fourth switching transistor is switched on. After the fourth switching transistor is switched on, the third resistor and the first capacitor are shorted. In this case, a connection relationship inside the plug status detection circuit400changes, a resistance value inside the plug status detection circuit also changes, and the resistance value of the connected resistor of the plug status detection circuit400can trigger the fifth switching transistor to switch on. After the fifth switching transistor is switched on, the trigger level conversion circuit can drive the wake-up signal output circuit to output the wake-up signal. In the trigger level conversion circuit, the fifth switching transistor works in a saturation region, and when the plug is inserted into the socket, the fifth switching transistor is switched off after being momentarily switched on. It is easy to learn that the wake-up circuit401immediately outputs the wake-up signal after the plug is inserted into the socket in the three scenarios of the US standard, the closed GB standard S3, and the EU standard; and the wake-up circuit401does not immediately output the wake-up signal, but outputs the wake-up signal after a delay of set duration after the plug is inserted into the socket in the scenario of the unclosed GB standard S3. The set duration is the charging duration of the first capacitor, and may be set by adjusting a capacitance value of the first capacitor, a resistance value of the third resistor, a resistance value of the fifth resistor, and the like.
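Because the set duration is an RC charging time, it can be estimated with the standard charging equation; the sketch below uses illustrative component values that are not from this application:

```python
# Minimal sketch: time for the first capacitor, charging toward the
# supply through the resistive path, to reach the fourth switching
# transistor's switch-on (base-emitter) threshold.
import math

def delay_duration(r_ohm: float, c_farad: float,
                   v_supply: float, v_threshold: float) -> float:
    """Time for an RC node charging toward v_supply to reach v_threshold."""
    return -r_ohm * c_farad * math.log(1 - v_threshold / v_supply)

# e.g. 100 kOhm path, 10 uF, 12 V supply, 0.7 V switch-on threshold
print(f"{delay_duration(100e3, 10e-6, 12.0, 0.7):.3f} s")  # ~0.060 s
```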
4. Wake-Up Signal Output Circuit In this embodiment of this application, the wake-up signal output circuit includes a seventh resistor, an eighth resistor, and a sixth switching transistor. A first terminal of the seventh resistor is configured to receive the input voltage of the wake-up circuit401; a first terminal of the eighth resistor is coupled to a second terminal of the seventh resistor, and a second terminal of the eighth resistor is coupled to the first terminal of the fifth switching transistor; and a first terminal of the sixth switching transistor outputs the wake-up signal when the sixth switching transistor is switched on, a second terminal of the sixth switching transistor is coupled to the second terminal of the seventh resistor, and a third terminal of the sixth switching transistor is coupled to the first terminal of the seventh resistor, as shown inFIG.9. The sixth switching transistor is a PNP type triode, a P-metal-oxide-semiconductor (PMOS), or another device having a similar function, and when a voltage difference between the second terminal and the third terminal is less than a particular value, the sixth switching transistor is switched on. The second terminal of the sixth switching transistor is a control terminal, and may be, for example, a base of the PNP type triode or a gate of the PMOS. After the fifth switching transistor in the trigger level conversion circuit is switched on, the second terminal of the sixth switching transistor is grounded through the eighth resistor and R5, so that the voltage difference between the second terminal and the third terminal is reduced, the sixth switching transistor is switched on, and the wake-up signal is output. The wake-up signal may be understood as a transition from a low level to a high level. In one embodiment, an output of the wake-up circuit401is at a low level before the sixth switching transistor is switched on, and the output of the wake-up circuit401is at a high level after the sixth switching transistor is switched on. When the CPU captures this output transition of the wake-up circuit401, it can be considered that the CPU has received the wake-up signal. In addition, the wake-up signal output circuit may further include a third capacitor, where a first terminal of the third capacitor is coupled to the first terminal of the sixth switching transistor, and a second terminal of the third capacitor is configured to output the wake-up signal. In this case, the wake-up signal may be understood as a pulse signal. In one embodiment, the output of the wake-up circuit401is at a low level before the sixth switching transistor is switched on, and the wake-up circuit401outputs a pulse signal after the sixth switching transistor is switched on. The pulse signal received by the CPU is considered as the wake-up signal.
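A software analogue of the two wake-up signal shapes just described (illustrative only; in this application the CPU captures the edge in hardware):

```python
# Minimal sketch: either wake-up signal shape, a level transition
# (low to high) or a pulse through the third capacitor, presents the
# CPU with a rising edge to detect.
def detect_wakeup(samples: list[int]) -> bool:
    """Return True on the first low-to-high transition in sampled levels."""
    return any(a == 0 and b == 1 for a, b in zip(samples, samples[1:]))

print(detect_wakeup([0, 0, 1, 1]))  # True  (level transition)
print(detect_wakeup([0, 1, 0, 0]))  # True  (pulse)
```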
It can be learned from the above analysis that the wake-up circuit401provided in this embodiment of this application can output the wake-up signal after the plug is inserted into the socket under the three standards, namely, the GB standard, the EU standard, and the US standard, so that the CPU drives the sampling circuit402after receiving the wake-up signal, to further perform plug status detection. In other words, by using the plug status detection circuit400provided in this embodiment of this application, a plug status can be detected under different standards by using one detection circuit, thereby reducing a quantity of products configured on an electric vehicle side and facilitating normalized design and supply of a product. In addition, in this embodiment of this application, the startup voltage is not injected into the sampling circuit402when the plug is not inserted into the socket, and the sampling circuit is not in a working state, so that the system power consumption can be reduced, and the plug status detection circuit400can have low quiescent current. Further, after the wake-up circuit401outputs the wake-up signal, the wake-up circuit401is disconnected from the detection point, so that the system power consumption may be further reduced, and the wake-up circuit401does not interfere with detection by the sampling circuit402. For example,FIG.10is a schematic diagram of a possible structure of the plug status detection circuit according to this embodiment of this application. In the plug status detection circuit shown inFIG.10, before the plug is inserted into the socket, the first switching transistor in the sampling circuit402is switched off, and the sampling circuit402does not work; and the second switching transistor in the wake-up circuit401is switched off, and the wake-up circuit401does not work. In this case, the system implements low quiescent current and low power consumption. In the three scenarios of the US standard, the closed GB standard S3, and the EU standard, the first capacitor starts charging after the plug is inserted into the socket. Before the fourth switching transistor is switched on, the resistance value of the connected resistor of the plug status detection circuit can meet a switch-on condition of the fifth switching transistor, the fifth switching transistor is switched on, and the wake-up circuit401outputs the wake-up signal. In the scenario of the unclosed GB standard S3, the first capacitor starts charging after the plug is inserted into the socket. Before the fourth switching transistor is switched on, the resistance value of the connected resistor of the plug status detection circuit cannot meet the switch-on condition of the fifth switching transistor, the fifth switching transistor is not switched on, and the wake-up circuit401does not output the wake-up signal. After the first capacitor is charged for the set duration, the fourth switching transistor is switched on, and the third resistor is shorted. In this case, the resistance value of the connected resistor of the plug status detection circuit can meet the switch-on condition of the fifth switching transistor, the fifth switching transistor is switched on, and the wake-up circuit401outputs the wake-up signal. Under any standard, after the wake-up circuit401outputs the wake-up signal to the CPU, the CPU controls the first switching transistor in the sampling circuit402to switch on, the startup voltage is injected into the sampling circuit402, and the CPU starts plug status detection. In addition, the CPU sets a high level for driving the third switching transistor, the third switching transistor is switched on, and the wake-up circuit401is disconnected from the detection point. In addition, an embodiment of this application further provides a plug status detection circuit. Refer toFIG.11. The plug status detection circuit1100includes a first resistor1101, a second resistor1102, and a switching transistor1103.
A first terminal of the first resistor1101is configured to inject a startup voltage, and after the startup voltage is injected, a level of a detection point of the plug status detection circuit1100is used to indicate a plug status; a first terminal of the second resistor1102is coupled to a second terminal of the first resistor1101, and a second terminal of the second resistor1102is coupled to the detection point; and a first terminal of the switching transistor1103outputs a wake-up signal when the switching transistor1103is switched on, a second terminal of the switching transistor1103is coupled to the second terminal of the first resistor1101, a third terminal of the switching transistor1103is coupled to the first terminal of the first resistor1101, and the wake-up signal is used to trigger a CPU to collect the level of the detection point. The switching transistor1103is a PNP type triode, a PMOS, or another device having a similar function, and when a voltage difference between the second terminal and the third terminal is less than a particular value, the switching transistor1103is switched on. The second terminal of the switching transistor1103is a control terminal, and may be, for example, a base of the PNP type triode or a gate of the PMOS. In one embodiment, resistance values of the first resistor1101and the second resistor1102meet the following conditions: The switching transistor1103is switched off when a plug is not connected to a socket, and the switching transistor1103is switched on when the plug is connected to the socket. In other words, in this embodiment of this application, by adjusting the resistance values of the first resistor1101and the second resistor1102, a resistance value of a connected resistor of the plug status detection circuit1100can meet a switch-on condition of the switching transistor1103under different standards when the plug is inserted into the socket. After the switching transistor1103is switched on, the CPU receives the wake-up signal, and then determines a plug status by collecting the level of the detection point. In addition, the plug status detection circuit1100may further include a level conversion unit. The level conversion unit is configured to convert an output voltage of a power battery into the startup voltage and inject the startup voltage into the first terminal of the first resistor, as shown inFIG.12. In an electric vehicle, the output voltage of the power battery is a fixed value, for example, 12 V, and this voltage value may not meet a voltage requirement for operation of the plug status detection circuit1100. Therefore, the output voltage of the power battery may be converted by the level conversion unit, so that a converted voltage (for example, 5 V) meets the voltage requirement for the operation of the plug status detection circuit1100. In conclusion, when the plug status detection circuit1100provided in this embodiment of this application is used, the wake-up signal can be output after the plug is inserted into the socket under all three standards, namely, the GB standard, the EU standard, and the US standard. After receiving the wake-up signal, the CPU collects a voltage of the detection point, and then determines the plug status. In other words, by using the plug status detection circuit1100provided in this embodiment of this application, a plug status can be detected under different standards by using one detection circuit, thereby reducing a quantity of products configured on an electric vehicle side and facilitating normalized design and supply of a product.
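A minimal sketch of the design condition on the first resistor1101and second resistor1102(the topology reading and all component values are assumptions for illustration):

```python
# Minimal sketch: the PNP-type switching transistor1103 turns on only
# when enough current flows through the first resistor1101 to pull its
# base (second terminal) well below its emitter (third terminal), which
# happens only when a plug's connected resistance completes the path
# from the startup voltage to ground.
def transistor_1103_on(v_startup: float, r1: float, r2: float,
                       r_connected: float | None, v_on: float = 0.6) -> bool:
    if r_connected is None:      # plug absent: no current, no emitter-base drop
        return False
    i = v_startup / (r1 + r2 + r_connected)  # current through R1101, R1102, plug
    return i * r1 > v_on                     # emitter-base drop across R1101

print(transistor_1103_on(5.0, 1_000.0, 1_000.0, None))     # False (unplugged)
print(transistor_1103_on(5.0, 1_000.0, 1_000.0, 1_000.0))  # True  (plugged in)
```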
An embodiment of this application further provides a controller. As shown inFIG.13, the controller1300includes a plug status detection circuit1301. The plug status detection circuit may be the plug status detection circuit400or the plug status detection circuit1100. In practical application, the controller1300may be a motor controller or a charging controller depending on configurations in different vehicles. An embodiment of this application further provides a vehicle. Refer toFIG.14. The vehicle includes a CPU1401and the foregoing controller1300. In practical application, the CPU1401may be a vehicle controller or a part of a vehicle controller. It is clear that a person skilled in the art can make various modifications and variations to this application without departing from the scope of this application. This application is intended to cover these modifications and variations of this application provided that they fall within the scope of the claims of this application and equivalent technologies thereof.
11860244
The same reference symbol used in various drawings indicates like elements. DETAILED DESCRIPTION FIG.1is a conceptual diagram of magnetometer architecture100that includes magnetic yoke102to divert flux to in-plane magnetic field sensors104a,104b, according to an embodiment. Most thin film magnetic field sensors are sensitive only to external fields (Bx, By) in the XY plane of the thin film. This includes Anisotropic MR (AMR), Giant MR (GMR) and Tunneling MR (TMR) sensors. It is desired that an out-of-plane magnetometer respond to out-of-plane fields Bz (main axis Z sensitivity) and not to in-plane fields (cross-axis XY sensitivity). One solution is to connect different magnetic field sensors with opposite responses in a Wheatstone bridge to cancel any in-plane fields. This solution is difficult to accomplish for both (XY) in-plane fields. Another solution uses magnetic yoke102as a shield for the cross-axis direction and to redirect the flux to in-plane magnetic field sensors104a,104b, as shown inFIG.1. Additionally, it is often desirable to reset a magnetic field sensor quickly after exposure to a large stray external magnetic field. One solution is to use an internal reset coil to generate an internal magnetic field to reset the magnetic field sensor. It is difficult, however, to incorporate both a reset coil and a magnetic yoke in the same design since both components need to be close to the magnetic field sensor for the reset to work properly.FIGS.2and3below illustrate alternative magnetometer architectures that include reset coils for resetting a magnetic yoke and/or magnetic field sensor(s) with low hysteresis. FIG.2is a conceptual diagram of a magnetometer architecture including a yoke with integrated reset coils, according to an embodiment. The magnetometer architecture shown inFIG.2can be applied to both out-of-plane and in-plane sensors (e.g., Hall-effect). In the example shown, magnetometer200includes magnetic yoke202, reset coils204a,204band magnetic field sensor206. Magnetic yoke202includes a first opening208afor receiving reset coil204a. Magnetic yoke202includes a second opening208bfor receiving reset coil204b. Openings208a,208binclude insulator210(e.g., Al2O3, SiO2) to electrically insulate coils204a,204bfrom magnetic yoke202. Also, baked photoresist can be used as an insulator in making coils204a,204band yoke202. In this embodiment, magnetic field sensor206lies in the XY plane (in-plane sensor) and is offset from and runs parallel to magnetic yoke202, as shown inFIG.2. Magnetic field sensor206should be as close as possible to magnetic yoke202to allow magnetic yoke202to “bend” the out-of-plane magnetic field into the plane of magnetic field sensor206. Regarding orientation, in general the long axis of magnetic field sensor206should be parallel to the long axis of magnetic yoke202, such that the easy axis of magnetic field sensor206is parallel to the reset field generated by magnetic yoke202. Reset coils204a,204bcan be single turn or multiple turn coils and each carries a current I in the same direction. Although two integrated reset coils are shown, any number of integrated reset coils with any number of turns can be used. Magnetic yoke202, openings208a,208band magnetic field sensor206can be any desired shape or size depending on the application. There can be any number of magnetic field sensors206. When magnetic field sensor206is measuring an external magnetic field there is no current I in reset coils204a,204b. 
When yoke202and/or magnetic field sensor206needs to be reset, current I is applied to reset coils204a,204b. Current I generates an internal magnetic field B that magnetizes magnetic yoke202. Simultaneously, the magnetization of magnetic yoke202resets magnetic field sensor206. In an embodiment, magnetic yoke202and/or magnetic field sensor206are reset after exposure to a large stray magnetic field. For example, a processing circuit can activate a current source coupled to reset coils204a,204bto generate current I periodically or in response to a trigger event. FIG.3is a conceptual diagram of a magnetometer architecture that includes a magnetic yoke and reset coils wound on magnetic pole pieces offset from the magnetic yoke, according to an embodiment. The magnetometer architecture shown inFIG.3can be applied to both out-of-plane and in-plane sensors. In the example shown, magnetometer300includes magnetic yoke302, magnetic pole pieces304a,304band magnetic field sensor306. Reset coils308a,308bare wound around magnetic pole pieces304a,304b, respectively. Magnetic pole pieces304a,304bare offset from magnetic yoke302and centered on magnetic sensor306, as shown inFIG.3. Any number of magnetic pole pieces, reset coils and magnetic field sensors can be used in this design. When the magnetic field sensor306is measuring an external magnetic field Bz, there is no current I in reset coils308a,308b. When magnetic field sensor306needs to be reset, current I is applied to reset coils308a,308b. Current I generates an internal magnetic field B that magnetizes pole pieces304a,304band, in turn, also magnetizes magnetic field sensor306. Magnetic pole pieces304a,304bamplify the magnetic fields generated by reset coils308a,308b, respectively, and sensor306acts as a flux guide. In an embodiment, magnetic pole pieces304a,304bare made of a soft magnetic material (e.g., NiFe, CoFe, FeSi, MnZn, NiZn) to minimize remanence. In an alternative embodiment, magnetic pole pieces304a,304bare made of a synthetic antiferromagnet (SAF) to minimize remanence (e.g., NiFe/Ru/NiFe, CoFe/Ru/CoFe, NiFe/CoFe/Ru/CoFe/NiFe). In an embodiment, magnetometer200or300is implemented in a three-axis magnetic field sensor chip package that includes three magnetic field sensors mounted on a substrate, one for each magnetic field axis (X, Y, Z). The magnetic field sensors are wire bonded to processing circuitry. In another embodiment, there are separate chip packages for each magnetic field sensor. In yet another embodiment, there is a single system on chip (SoC) that includes the magnetometers and other sensors and processing circuitry. In an embodiment, the magnetic field sensors can be coupled in a Wheatstone bridge configuration with each sensor arranged to maximize sensitivity and minimize temperature influences. In the presence of an external magnetic field, the resistance values of the magnetic sensors change, causing a bridge imbalance and generating an output voltage proportional to the magnetic field strength. The output voltage can be processed by the processing circuitry to generate raw magnetometer measurement data. The magnetometer sensor chip can be included in a consumer product (e.g., smart phone, tablet computer, wearable device), and the raw magnetometer measurement data can be made available to one or more applications (e.g., navigation applications) running on a host processor of the consumer product. 
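As an illustration of the Wheatstone bridge readout mentioned above, the sketch below computes the bridge imbalance voltage for a field-induced resistance change (values are illustrative, not design values from this disclosure):

```python
# Minimal sketch: a field-induced resistance change unbalances a
# Wheatstone bridge of magnetoresistive elements, producing an output
# voltage proportional to the change (and hence to the field strength).
def bridge_output(v_bias: float, r_nominal: float, delta_r: float) -> float:
    """Differential output of a bridge with two arms at R+dR and two at R-dR."""
    r_plus, r_minus = r_nominal + delta_r, r_nominal - delta_r
    v_a = v_bias * r_plus / (r_plus + r_minus)   # one half-bridge midpoint
    v_b = v_bias * r_minus / (r_plus + r_minus)  # opposite midpoint
    return v_a - v_b                             # = v_bias * delta_r / r_nominal

print(bridge_output(3.3, 1_000.0, 1.0))  # 0.0033 V for a 0.1% resistance change
```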
FIG.4Ais a flow diagram of a process of using one or more integrated reset coils coupled to a magnetic yoke to reset the magnetic yoke and one or more magnetic field sensors in a magnetometer, according to an embodiment. Process400can begin by determining whether a magnetic field sensor needs to be reset (402). In accordance with a determination that a magnetic field sensor needs to be reset, process400continues by applying current to one or more reset coils coupled to a magnetic yoke to induce an internal magnetic field in the magnetic yoke (404). Process400continues by using the internal magnetic field induced in the magnetic yoke to reset the magnetic yoke and the one or more magnetic field sensors (406). FIG.4Bis a flow diagram of a process of using one or more integrated reset coils coupled to one or more magnetic pole pieces to reset one or more magnetic field sensors in a magnetometer, according to an embodiment. Process408can begin by determining whether a magnetic field sensor needs to be reset (410). In accordance with a determination that one or more magnetic field sensors need to be reset, process408continues by applying current to one or more reset coils coupled to one or more magnetic pole pieces to induce an internal magnetic field in the one or more magnetic pole pieces (412). Process408continues by using the internal magnetic field induced in the one or more magnetic pole pieces to reset the one or more magnetic field sensors (414).
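A minimal sketch of process400/408from a controller's point of view (driver callbacks and the pulse width are illustrative assumptions):

```python
# Minimal sketch: on a trigger such as exposure to a large stray field,
# the processing circuitry pulses the reset-coil current, letting the
# induced field in the yoke or pole pieces reset the field sensor(s).
import time

def reset_if_needed(needs_reset, set_coil_current, pulse_s: float = 1e-3) -> bool:
    """Apply a brief reset-coil current pulse when a reset is required."""
    if not needs_reset():
        return False
    set_coil_current(True)    # drive current I through the reset coil(s)
    time.sleep(pulse_s)       # hold long enough to magnetize yoke/pole pieces
    set_coil_current(False)   # measurement resumes with no coil current
    return True

# Example with stand-in callbacks:
print(reset_if_needed(lambda: True, lambda on: None))  # True
```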
FIG.5is a block diagram of an electronic device architecture that includes at least one magnetometer as described in reference toFIGS.2-4, according to an embodiment. Architecture500includes processor(s)501, memory interface502, peripherals interface503, sensors504a. . .504n, display device505(e.g., touch screen, LCD display, LED display), I/O interface506and input devices507(e.g., touch surface/screen, hardware buttons/switches/wheels, virtual or hardware keyboard, mouse). Memory512can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices and/or flash memory (e.g., NAND, NOR). Memory512stores operating system instructions508, sensor processing instructions509and application instructions510. Operating system instructions508include instructions for implementing an operating system on the device, such as iOS, Darwin, RTXC, LINUX, UNIX, WINDOWS, or an embedded operating system such as VxWorks. Operating system instructions508may include instructions for handling basic system services and for performing hardware dependent tasks. Sensor-processing instructions509perform post-processing on sensor data (e.g., averaging, scaling, formatting, calibrating) and provide control signals to sensors. Application instructions510implement software programs that use data from one or more sensors504a. . .504n, such as navigation, digital pedometer, tracking or map applications, or any other application that needs heading or orientation data. At least one sensor504ais a 3-axis magnetometer200or300as described in reference toFIGS.1-4. For example, in a digital compass application executed on a smartphone, the raw magnetometer output data is provided to processor(s)501through peripheral interface503. Processor(s)501execute sensor-processing instructions509, to perform further processing (e.g., averaging, formatting, scaling) of the raw magnetometer output data. Processor(s)501execute instructions for various applications running on the smartphone. For example, a digital compass uses the magnetometer data to derive heading information to be used by a compass or navigation application. The more accurate the magnetometer data, the more accurate the heading calculation for the electronic device. Other applications are also possible (e.g., navigation applications, gaming applications, calibrating other sensors). While this document contains many specific implementation details, these details should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
11860245
DESCRIPTION OF EMBODIMENTS Embodiments of the invention will be described in detail with reference to the drawings. FIG.1is a circuit diagram of a magnetic detection device according to an embodiment of the invention. In the magnetic detection device10inFIG.1, the circuit elements in the range enclosed by the dashed line A are formed as a semiconductor integrated circuit (IC for magnetic detection) on a single semiconductor substrate (semiconductor chip). The present invention is not limited to this. Elements that constitute the magnetic sensor20may also be formed on a single semiconductor chip along with elements that constitute a detection circuit. For example, a Hall element or magnetoresistive element can be used as the magnetic sensor20. The magnetic detection device10of the embodiment includes: an amplifier circuit (MR amplifier)11that amplifies detection signals of the magnetic sensor20; an on-timer circuit12A that counts a predetermined set time in synchronization with the rising edge of an output signal of the amplifier circuit11; an off-timer circuit12B that counts a predetermined set time in synchronization with the falling edge of the output signal of the amplifier circuit11; and an oscillation circuit (OSC)13that generates an operation clock signal CK for the timer circuits12A,12B. The timer circuits12A,12B can be configured with a counter circuit. The magnetic detection device10includes: a logic circuit14to which output signals of the timer circuits12A,12B are input; and an output driver circuit15that receives an output signal of the logic circuit14and generates a signal OUT (magnetic detection output) which is to be output outside the chip. The on-timer circuit12A starts counting time in synchronization with the rising edge of the output signal of the amplifier circuit11. After the on-timer circuit12A counts the predetermined time, the output of the on-timer circuit12A changes to a high level. The on-timer circuit12A is reset by change in a signal to a high level, the signal being obtained by inverting the output signal of the amplifier circuit11with the inverter INV1. The output of the on-timer circuit12A changes to a low level. On the other hand, the signal obtained by inverting the output signal of the amplifier circuit11with the inverter INV1makes the off-timer circuit12B start counting time in synchronization with the falling edge of the output signal of the amplifier circuit11. After the off-timer circuit12B counts the predetermined time, the output of the off-timer circuit12B changes to a high level. The off-timer circuit12B is reset by change in the output signal of the amplifier circuit11to a high level. The output of the off-timer circuit12B changes to a low level. For example, the amplifier circuit11includes: an amplifier with an output that varies according to potential of an input; and a comparator that compares the output of the amplifier with a predetermined voltage. The oscillation circuit13can be configured with a ring oscillator or the like. The logic circuit14can be configured with an RS flip-flop. When the output of the on-timer circuit12A changes to a high level, an output of the logic circuit14changes to a high level and keeps its output state. When the output of the off-timer circuit12B changes to a high level, the output of the logic circuit14changes to a low level and keeps its output state. Next, functions of the magnetic detection device10will be explained using timing charts inFIG.2.
A conventional detection circuit shown inFIG.4does not have the timer circuits12A,12B and the logic circuit14of the magnetic detection device10shown inFIG.1. In this case, when the magnetic sensor20is placed in an environment with an AC magnetic field as shown inFIG.2(A)and a magnetic field to be detected as shown inFIG.2(B)enters the magnetic sensor20, the output of the magnetic sensor20changes as shown inFIG.2(C). The output of the amplifier circuit11changes as shown inFIG.2(D). Then, the signal OUT output from the output driver circuit15becomes a signal containing noise due to the AC magnetic field, as shown inFIG.2(F). In contrast, in the magnetic detection device10of the embodiment, when the output of the amplifier circuit11changes as shown inFIG.2(D), the on-timer circuit12A starts counting time at the time points t1, t2, t3of the rising edge of the output signal of the amplifier circuit11. When the output signal of the amplifier circuit11drops before the predetermined time is counted, a reset is applied. As shown inFIG.2(E), noise is thereby removed from the output signal OUT. The output signal of the amplifier circuit11rises at the time point t4due to a change in the magnetic field to be detected. At the time point t5, at which the predetermined time has been counted since the time point t4, the output signal OUT changes to a high level. After the output signal OUT rises, when the output of the amplifier circuit11changes as shown inFIG.2(D), the off-timer circuit12B starts counting time at the time points t6, t7, t8of the falling edge of the output signal of the amplifier circuit11. When the output signal of the amplifier circuit11rises before the predetermined time is counted, a reset is applied. As shown inFIG.2(E), noise is removed from the output signal OUT. The output signal of the amplifier circuit11falls at the time point t9due to a change in the magnetic field to be detected. At the time point t10, at which the predetermined time has been counted since the time point t9, the output signal OUT changes to a low level. Thus, the magnetic detection device10of the embodiment can eliminate noise caused by an AC magnetic field below a predetermined frequency by setting the time to be counted by the timer circuits12A,12B. The magnetic detection device10can eliminate this noise without using external components such as capacitors or resistors. For example, the magnetic detection device10can eliminate noise below 50 Hz by setting the time to be counted by the timer circuits12A,12B to 10 ms.
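The timer behavior described above is, in effect, a temporal debounce filter: an edge of the amplifier output is accepted only if the new level persists for the full set time. The following Python sketch mirrors that filtering logic and can help in reasoning about the timing charts ofFIG.2; the function name and the sampled-signal representation are illustrative assumptions, not part of the patent.

    def timer_filtered(samples, hold_count):
        # samples: amplifier output sampled at a fixed rate (0 or 1)
        # hold_count: set time expressed in ticks of the operation clock
        out, level, run = [], 0, 0
        for s in samples:
            if s != level:
                run += 1                # on-/off-timer is counting
                if run >= hold_count:
                    level, run = s, 0   # set time reached: OUT changes level
            else:
                run = 0                 # input reverted early: timer is reset
            out.append(level)
        return out

With a 1 kHz sample clock, hold_count = 10 corresponds to the 10 ms setting in the example above, so disturbances whose half-period is shorter than 10 ms (AC-field noise below roughly 50 Hz) never reach the output.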
Next, the second example of the magnetic detection device10of the embodiment will be described usingFIG.3. As shown inFIG.3, the second example of the magnetic detection device includes: a register16for externally setting the predetermined time to be counted by the timer circuits12A,12B; and an external terminal P1for inputting setting values. According to the configuration described above, the frequency of the AC magnetic field to be removed can be set arbitrarily by changing the value set in the register16. To reduce the number of terminals, the register16should be a shift register that allows serial input of data from the outside. Two registers16may be provided so that the time to be counted by the on-timer circuit12A and the time to be counted by the off-timer circuit12B can be set separately. In the magnetic detection device of the example shown inFIG.3, an inverter INV2that inverts the output of the logic circuit14is provided. A signal inverted by the inverter INV2is supplied to the on-timer circuit12A as an enable signal EN1which permits time-counting of the on-timer circuit12A. The output signal of the logic circuit14is supplied to the off-timer circuit12B as an enable signal EN2which permits time-counting of the off-timer circuit12B. According to the magnetic detection device with the above configuration, time-counting of the off-timer circuit12B can be stopped while the on-timer circuit12A is counting time, and time-counting of the on-timer circuit12A can be stopped while the off-timer circuit12B is counting time. This brings the advantage that power consumption of the circuit is reduced. Embodiments of the invention are described above. The present invention is not limited to those embodiments. For example, in the above embodiment, the amplifier circuit11includes: the amplifier with the output that varies according to the potential of the input; and the comparator that compares the output of the amplifier with a predetermined voltage. Alternatively, the amplifier circuit11may consist of an amplifier and a Schmitt trigger circuit, or it may be an amplifier whose output switches between high and low levels. In the above embodiment, an oscillation circuit13that generates clock signals for time-counting of the timer circuits12A,12B is provided in the chip. Alternatively, an external terminal for inputting clock signals from outside may be provided so that the oscillation circuit13can be omitted. INDUSTRIAL APPLICABILITY Application of the invention is not limited to detection of an operating state of an actuator. The invention can be widely used in magnetic detection devices that amplify output signals from magnetic sensors which are placed at locations where AC magnetic fields may appear as noise and which detect the magnetic fields to be monitored, such as a magnetic sensor that detects a position of a motor rotor. REFERENCE SIGNS LIST
10 Magnetic detection device
11 Amplifier circuit
12A On-timer circuit (first timer)
12B Off-timer circuit (second timer)
13 Oscillation circuit
14 Logic circuit
15 Output driver circuit
16 Register
20 Magnetic sensor
OUT Magnetic detection output
9,203
11860246
The same reference symbol used in various drawings indicates like elements. DETAILED DESCRIPTION The disclosed short-range position tracking embodiments use a magnetic source (e.g., a permanent magnet, electromagnet) to generate a stationary magnetic field gradient in an environment that is independent of the local ambient magnetic field gradient of the environment. In an embodiment, an alternating current (AC) field serves as the magnetic source. In an embodiment, a user wears or holds a device (e.g., smartwatch, smart pencil) that includes a sensing array (e.g., array of magnetometers), an angular rate sensor (e.g., a 3-axis MEMS gyro) and a processor. The processor obtains measurements of the stationary magnetic field gradient from the sensing array and stores the measurements in memory of the device. The processor also obtains the rotation angle of the device from a motion sensor. The processor computes the velocity of the device based on current and stored magnetic field gradient measurements and the rotation angle. The processor integrates the velocity to obtain the position of the device. Applications such as AR and VR use short-range position tracking (e.g., tracking hand positions) to precisely track the hands of a user while the user interacts with the AR/VR application. The user's hands move through the stationary magnetic field gradient, enabling the sensing array to measure the change of the magnetic field gradient as the hands move. The changing magnetic field gradient is used to compute the velocity of the hands, which is integrated by the processor to obtain position. In an embodiment, the magnetic source is mounted on either the head or the chest of the user. In the case where the magnetic source is mounted on the head, the rotation and translation of the head with respect to the hands is removed to provide a stationary magnetic field gradient with respect to the hands. In an embodiment, drift in the position measurement due to the integration of velocity is corrected with data from another sensor (e.g., a video camera, an ultrasonic distance sensor, etc.). In an embodiment, to mitigate drift, the velocity is set to zero whenever motion data (e.g., acceleration data) measured by a motion sensor (e.g., an accelerometer) indicates the hand velocity should be zero. In an embodiment, interfering magnetic sources are cancelled out by the sensing array. In an embodiment, sensing arrays with different spatial arrangements are used to provide robust gradient measurements. In an embodiment, the initial position of the device with the sensing array is established by the user by placing their hands in a known reset position (e.g., over the ears). The initial position can also be determined from data obtained by another sensor (e.g., a camera, ultrasonic distance sensor). In an embodiment, the user may have multiple reset positions, where the position is reset automatically to the known reset location. FIG.1is a conceptual drawing illustrating a typical, ambient magnetic field gradient of an indoor environment, according to an embodiment. The location of user101can be tracked using the ambient magnetic field gradient caused by Earth's magnetic field, which in this example has a range of about 0.1–0.5 μT/cm. As shown by the heat map (indicated by shades of gray, where the darker the area the higher the magnetic flux density), the typical indoor environment has a non-uniform magnetic field gradient with "dead zones" that have very low magnetic flux density.
For example, area102has a higher magnetic flux density than area103. For larger distances (e.g., 1 centimeter (cm) resolution), the ambient magnetic field gradient may be sufficient to track user101. For precise, short-range position tracking, such as tracking the hands of user101, the ambient magnetic field gradient is insufficient. To improve the magnetic field gradient, a magnetic source capable of producing a uniform and stationary magnetic field gradient is introduced into the environment, as described in reference toFIG.2. FIG.2illustrates the use of a stationary magnetic field gradient generated by a magnetic source (e.g., permanent magnet, electromagnet) to track a user's hands in an AR/VR application, according to an embodiment. Note that reference to an AR/VR application is a non-limiting example of an application that could benefit from the short-range position tracking described herein. Other applications could also benefit from the disclosed embodiments. In the example shown, magnetic source201is placed on table202in environment200(an indoor or outdoor environment). User204is using AR/VR hardware205(e.g., VR headset) to interact with a VR application. User204is wearing device203(e.g., a smartwatch) on his left wrist. The left hand of user204moves through the stationary magnetic field gradient generated by magnetic source201, enabling the sensing array in device203to measure the change of the magnetic field gradient as his hand moves. The changing magnetic field gradient is used to compute the velocity of his hand, which is integrated by a processor in device203to obtain the position of his hand, as described in reference toFIG.5. In an alternative embodiment, magnetic source201is mounted on user204(e.g., mounted on his head or chest). In the case where magnetic source201is mounted on his head, the rotation and translation of his head with respect to his hand is removed by the processor to provide a stationary magnetic field gradient with respect to his hand. In an embodiment, device203can be worn on both wrists of user204and the positions of both devices can be tracked simultaneously. In an embodiment, the initial position of each hand is established by user204by placing his hands at a known reset position (e.g., over the ears). The initial position of the hands can also be determined from data obtained by another sensor. For example, VR hardware205can include a camera or ultrasonic distance sensor. In an embodiment, user204may have multiple reset positions for his hands, where the positions are reset automatically to known reset locations. In an embodiment, the positions of the hands can be relative to a Cartesian reference coordinate frame fixed to table202or to device203. In an embodiment, a calibration step can be performed where user204moves device203within the stationary magnetic field in a variety of predefined translations and orientations so that device203can generate and store a reference magnetic field gradient. The reference magnetic field gradient can be used during normal operation to detect if the stationary magnetic field has been disturbed by a ferromagnetic object. If a disturbance is detected, the user can be instructed (through a display or audio of the device or a companion device) to remove the ferromagnetic object from the stationary magnetic field. FIG.3Aillustrates a stationary magnetic field generated by a small permanent magnet, according to an embodiment.
Magnetic source301(e.g., a neodymium magnet) generates a stationary magnetic field gradient which has a range of about 1–1000 μT/cm. Remote bodies300a,300bare also shown. FIG.3Billustrates remote body300ahaving sensing array302with sensors303to measure the stationary magnetic field gradient. In an embodiment, interfering magnetic sources are cancelled out by sensing array302. In an embodiment, sensing array302is configured to have a spatial arrangement of sensors303(e.g., spatial array of magnetometers) to provide robust gradient measurements, as described in reference toFIGS.4A and4B. FIGS.4A and4Billustrate simulated magnetic field gradients, according to an embodiment. As previously disclosed, a small neodymium magnet can generate a uniform and stationary magnetic field gradient up to 1000 μT/cm in the short range (e.g., within 50 cm). The stationary magnetic field gradient is independent of the ambient magnetic field gradient of the local environment. The magnetic field gradient is strong enough in the short range (e.g., has a sufficiently high signal-to-noise ratio (SNR)) to be measured by sensing array302in remote body300a. FIG.5is a conceptual block diagram of a position estimator500, according to an embodiment. Position estimator500includes sensing array501, velocity calculator502, IMU503and integrator504. Sensing array501includes a spatial array of magnetometers that measure a change in the magnetic field gradient. In an embodiment, velocity calculator502and integrator504are implemented in software or firmware executed by one or more processors, such as a central processing unit (CPU), digital signal processor (DSP) or an embedded processor in an application specific integrated circuit (ASIC). In an embodiment, the velocity V of a remote body is derived from Equation [1]:

\dot{B} = -\Omega \times B + R \nabla^2 h \, R^T V,   [1]

where \dot{B} is the time derivative of the sensed magnetic field B, \Omega is the rotation angle vector obtained from IMU503, R is a rotation matrix from an inertial frame to the remote body frame, \nabla^2 h is the (initially unknown) magnetic field gradient measured by a 3-axis magnetometer, and V is the velocity of the remote body. A detailed derivation of Equation [1] is found in Vissière, David, Alain Martin, and Nicolas Petit, "Using Distributed Magnetometers to Increase IMU-based Velocity Estimation into Perturbed Area," 2007 46th IEEE Conference on Decision and Control, 2007. Velocity calculator502solves Equation [1] for the velocity V, and integrator504integrates the velocity V to obtain the position X of the remote body. In an embodiment, the rotation angle vector \Omega of the remote body can be determined from a 3-axis MEMS gyro in IMU503by, for example, integrating the angular rates output by the 3-axis MEMS gyro. In an embodiment, the 3-axis MEMS gyro can be packaged in a SoC with a 3-axis magnetometer, a 3-axis MEMS accelerometer and other supporting circuitry for measuring magnetic fields, accelerations and Earth's gravity. A simplified version of Equation [1] is shown in Equation [2], where the rotation angle is ignored:

\int \frac{\partial B}{\partial t} = \frac{\partial B}{\partial x} \times \int \frac{\partial x}{\partial t}.   [2]

For a magnetometer with 0.1 μT RMS noise, the minimum magnetic field gradient required to achieve 1 mm resolution is 1 μT/cm. The position tracking described above is independent of the local ambient magnetic field gradient, which is advantageous in outdoor environments where the ambient magnetic field gradient is low. The disclosed embodiments also allow for simultaneous tracking of multiple bodies within a certain distance of the magnet. The position of a body is calculated by integrating its velocity rather than performing a double integration of acceleration, which is more sensitive to drift.
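As a concrete illustration of the velocity and position computation, the following sketch solves Equation [1] for V and integrates it over one measurement epoch. It is a minimal reading of the equations above, assuming the 3x3 gradient matrix is well-conditioned; the function names and array shapes are illustrative, not from the patent.

    import numpy as np

    def body_velocity(B_dot, B, omega, R, G):
        # Equation [1]: B_dot = -omega x B + R G R^T V, where G is the
        # measured 3x3 magnetic field gradient; solve the linear system for V
        rhs = B_dot + np.cross(omega, B)
        return np.linalg.solve(R @ G @ R.T, rhs)

    def next_position(x_prev, v, dt):
        # integrator504: position is the time integral of velocity
        return x_prev + v * dt

A poorly conditioned G (e.g., in a near-zero-gradient "dead zone") makes the solve ill-posed, which is consistent with the point above that tracking needs a sufficiently strong local gradient.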
FIG.6illustrates a magnetic field gradient measured by a sensing array with different distances between sensors, according to an embodiment. Plots601,602and603show the change in magnetic field gradient for distances of 5 cm, 2 cm and 0.5 cm, respectively. To accurately capture the fast-decaying gradient from the magnetic source, a magnetometer array is used. When the remote body is close to the magnetic source, two magnetometers in the array that are separated by a short distance d are used to measure the magnetic field gradient to avoid underestimating the magnetic field gradient. When the remote body is moving away from the magnetic source, magnetometer pairs separated by a larger distance d are used to measure the magnetic field gradient. The noise in the magnetic field gradient measurement can be simplified according to Equation [3]:

\text{Noise} = \frac{0.1 \times \sqrt{2}}{d},   [3]

where a larger distance d reduces the noise in the resulting gradient measurement. This reduction in noise increases the SNR in low magnetic field gradient areas.
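The trade-off in Equation [3] can be made concrete with a short sketch that picks, for a given local gradient strength, the smallest magnetometer spacing that still meets a target SNR, so that the fast-decaying gradient near the source is not underestimated. The 0.1 μT value mirrors the RMS noise figure quoted above; the selection rule, thresholds and names are illustrative assumptions.

    import math

    def gradient_noise(d_cm, sensor_rms_ut=0.1):
        # Equation [3]: differencing two sensors scales noise by sqrt(2),
        # and dividing by the separation d converts it to gradient noise
        return sensor_rms_ut * math.sqrt(2) / d_cm

    def pick_spacing(gradient_ut_per_cm, target_snr=10.0,
                     spacings_cm=(0.5, 2.0, 5.0)):
        # prefer the smallest spacing whose SNR still meets the target
        for d in spacings_cm:
            if gradient_ut_per_cm / gradient_noise(d) >= target_snr:
                return d
        return spacings_cm[-1]  # fall back to the widest pair far from the source

For a strong near-source gradient (e.g., 10 μT/cm) this returns the 0.5 cm pair; for a weak far-field gradient (e.g., 0.5 μT/cm) it returns the 5 cm pair, matching the adaptive pairing described above.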
FIG.7is a flow diagram of a position tracking process700using a magnetic field gradient, according to an embodiment. Process700can be implemented by the device architecture800, described in reference toFIG.8. In an embodiment, process700begins by determining a change in a stationary magnetic field over time (701). For example, a sensor array (e.g., an array of magnetometers) in a device (e.g., smartwatch, smart pencil) worn or held by a user can measure the stationary magnetic field generated by a magnetic source (e.g., permanent magnet, electromagnet) over time. Process700continues by determining a change in the magnetic field over distance (702). For example, the sensor array can measure a change in the magnetic field generated by the magnetic source over distance. Process700continues by obtaining the rotation angle of the device (703). For example, a MEMS gyroscope embedded in the device can provide the angular rate of the device, which can be integrated to determine the rotation angle. Process700continues by determining velocity based on the change in the magnetic field over time, the change in the magnetic field over distance and the rotation angle (704). For example, one or more processors can determine the velocity of the device using Equation [1]. Process700continues by determining the position of the device from the velocity of the device (705). For example, the one or more processors of the device can compute the current position P_{curr} of the device by integrating the velocity V of the device according to Equation [4]:

P_{curr} = (V_{curr} - V_{prev}) \times \Delta t + \text{initial position},   [4]

where V_{curr} is the current velocity, V_{prev} is the previously computed velocity from the previous measurement epoch, and \Delta t is the elapsed time since the previous measurement epoch. V_{prev} is computed during the previous position measurement epoch and stored in a buffer in the device. Example Device Architecture FIG.8illustrates a device architecture for implementing the features and processes described in reference toFIGS.1-7, according to an embodiment. Architecture800can be implemented in any desired system or product, including but not limited to a smartwatch or smart pencil. Architecture800can include memory interface802, one or more data processors, video processors, co-processors, image processors and/or other processors801, and peripherals interface804. Memory interface802, one or more processors801and/or peripherals interface804can be separate components or can be integrated in one or more integrated circuits. The various components in architecture800can be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems can be coupled to peripherals interface804to facilitate multiple functionalities. In this example architecture800, IMU806and sensing array807are connected to peripherals interface804to provide data that can be used to determine a change in magnetic field gradient as a function of time and distance, as previously described in reference toFIGS.1-7. IMU806can include one or more accelerometers and/or gyros configured to determine change of speed and direction of movement of the device. Communication functions can be facilitated through one or more wireless communication subsystems805, which can include radio frequency (RF) receivers and transmitters (or transceivers) and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem805can depend on the communication network(s) over which a mobile device is intended to operate. For example, architecture800can include communication subsystems805designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi™ or Wi-Max™ network or a Bluetooth™ network. Memory interface802can be coupled to memory803. Memory803can include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices and/or flash memory (e.g., NAND, NOR). Memory803can store operating system808, such as iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. Operating system808may include instructions for handling basic system services and for performing hardware dependent tasks. In some implementations, operating system808can include a kernel (e.g., UNIX kernel). Memory803stores communication instructions809to facilitate communicating with one or more additional devices, one or more computers and/or one or more servers, such as, for example, instructions for implementing a software stack for wired or wireless communications with other devices. Memory803stores sensor processing instructions810to facilitate sensor-related processing and functions, such as processing output from sensing array807. Memory803stores position tracking instructions811for providing the features and performing the processes described in reference toFIGS.1-7. Memory803stores instructions812for one or more applications that use the position tracking described in reference toFIGS.1-7, such as AR or VR applications. Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures, or modules. Memory803can include additional instructions or fewer instructions. Furthermore, various functions of the mobile device may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., SWIFT, Objective-C, C#, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, a browser-based web application, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor or a retina display device for displaying information to the user. The computer can have a touch surface input device (e.g., a touch screen) or a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The computer can have a voice input device for receiving voice commands from the user. While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
21,558
11860247
DETAILED DESCRIPTION Hereinafter, embodiments of the present disclosure will be described with reference to the drawings. In the following embodiments, the same or equivalent parts are denoted by the same reference signs. First Embodiment The following describes a first embodiment. In the present embodiment, a magnetic sensor provided with a magnetic field generator is described. The magnetic sensor measures, i.e., detects, an external magnetic field based on a high-frequency magnetic field generated by the magnetic field generator. As shown inFIG.1, the magnetic sensor is configured to include a diamond2, a light source3, a temperature control unit4, a measurement unit5, and the like in addition to a magnetic field generator1. As shown inFIGS.1to3, the magnetic field generator1is configured by arranging two loop-shaped coils, an upper layer coil20and a lower layer coil30, on a substrate10in an overlapping, layered manner. In the drawing, an XY plane is a plane parallel to a surface (e.g., an upper surface) of the substrate10, and a normal direction with respect to the XY plane is a direction parallel to the Z axis. Further, the magnetic field generator1is provided with an upper layer power source40which is a mechanism for energizing the upper layer coil20and a lower layer power source50which is a mechanism for energizing the lower layer coil30. The substrate10supports the upper layer coil20and the lower layer coil30. For example, the substrate10is made of an epoxy-based resin material or the like, and has a structure including the upper layer coil20and the lower layer coil30inside. A dielectric composed of a part of the substrate10is sandwiched between the upper layer coil20and the lower layer coil30. Here, the substrate10is provided as a multi-layer substrate, which has the upper layer coil20and the lower layer coil30built therein, by layering and combining a plurality of printed circuit boards. For example, a plurality of printed circuit boards respectively having front and back surfaces covered with a metal foil such as copper foil are prepared, some of which are patterned by etching to form the upper layer coil20, the lower layer coil30and the like. Then, the printed circuit boards after patterning are combined and integrated by press processing or the like to form the substrate10which has the upper layer coil20and the lower layer coil30built therein. Further, as shown inFIGS.1to3, the substrate10is formed with a through hole11piercing through an inside of the upper layer coil20and the lower layer coil30. The through hole11may be formed so as to penetrate the front and back surfaces of the substrate10, in which case the through hole11has a cylindrical shape. The diamond2and the temperature control unit4, which is described later, are arranged in the through hole11. Note that, although omitted inFIGS.1and3, the upper layer coil20and the lower layer coil30are, as shown inFIG.2, sandwiched between an upper GND layer12and a lower GND layer13at a ground potential, which are arranged symmetrically on the front and back surfaces of the substrate10(hereinafter "ground" may be designated as GND). In such manner, a microstrip line is configured by arranging the upper GND layer12and the lower GND layer13vertically symmetrically on the substrate10. The upper GND layer12and the lower GND layer13are formed to cover at least a coil portion21of the upper layer coil20and a coil portion31of the lower layer coil30.
Then, the upper GND layer12is partially removed, for example, at a position outside the coil portion21and the coil portion31, and, via the removed portion, an electrical connection between the upper layer power source40and the upper layer coil20and an electrical connection between the lower layer power source50and the lower layer coil30are respectively enabled. Further, the through hole11is formed to penetrate the upper GND layer12and the lower GND layer13. The upper layer coil20has the coil portion21, a slit22that partially cuts out the coil portion21, and a lead portion23that is arranged on both sides of the slit22and is drawn out in an outer peripheral direction of the coil portion21. The coil portion21and the lead portion23are made of a first conductive material, such as copper as described above, for example. The coil portion21constitutes a loop circuit composed of a loop-shaped coil. Specifically, the coil portion21has an annular shape having a predetermined width, and the length per loop, that is, the electric length of one loop, is set to one wavelength of the high-frequency current flowing from the upper layer power source40. That is, a distributed constant circuit is configured such that one wavelength of the high-frequency current and the electric length are set to be about the same. When a high-frequency current near 2.87 GHz is used, one wavelength is approximately 100 mm. Therefore, the radius of the coil portion21is approximately 16 mm. However, since the wavelength shortening rate changes according to the material around the coil, that is, the dielectric constant of the substrate10, the electric length per loop of the coil portion21can be set according to the wavelength shortening rate. For example, when FR4 composed of glass epoxy is used as the epoxy-based resin material, the dielectric constant is about 4, which makes the radius about 8 mm due to the wavelength shortening rate, and the length of one loop of the coil portion21is set to be approximately 50 mm. Generally, the relationship between the physical length and the electric length of the wiring of the loop circuit when the dielectric constant of the substrate is the same is shown inFIG.4A. InFIG.4A, L means a reference length, and 1 L, 2 L, 5 L, and 10 L mean lengths obtained by multiplying the reference length by the indicated factor. Further, the relationship between the dielectric constant of the substrate and the electric length when the physical length of the wiring of the loop circuit is the same is shown inFIG.4B. InFIG.4B, εr means the relative permittivity, and εr: 1, εr: 2, εr: 5, εr: 10, and εr: 20 indicate its numerical values. As shown in these drawings, the electric length in the loop is proportional to the physical length, and the higher the dielectric constant, the longer the electric length. Therefore, in the present embodiment, the electric length per loop of the coil portion21composed of the loop coil is set based on the physical length of the upper layer coil20corresponding to the wiring and the relative permittivity of the substrate10. The length of one loop of the coil portion21and one wavelength of the high-frequency current do not have to completely match. That is, if the XY plane can be oriented in the magnetic field direction as described later in generating the high-frequency magnetic field, the length of one loop of the coil portion21and one wavelength of the high-frequency current may be different from each other. For example, a magnetic field is generated on the XY plane even if a deviation of ±20% occurs, but the deviation may preferably be within ±10%.
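The loop dimensions quoted above can be reproduced with a back-of-the-envelope calculation. The sketch below estimates the one-wavelength loop length and radius from the drive frequency and the relative permittivity, using the simple shortening factor 1/√εr; a real board would need an effective permittivity that accounts for the stack-up, so treat the numbers as rough checks rather than design values.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def loop_geometry(freq_hz, eps_r):
        # guided wavelength shortened by sqrt(eps_r); one loop = one wavelength
        wavelength_m = C / (freq_hz * math.sqrt(eps_r))
        radius_m = wavelength_m / (2 * math.pi)
        return wavelength_m, radius_m

    # 2.87 GHz in air: ~104 mm loop, ~17 mm radius (text: ~100 mm, ~16 mm)
    # 2.87 GHz in FR4 (eps_r ~ 4): ~52 mm loop, ~8 mm radius (text: ~50 mm, ~8 mm)
    for eps in (1.0, 4.0):
        lam, r = loop_geometry(2.87e9, eps)
        print(f"eps_r={eps}: loop {lam*1e3:.1f} mm, radius {r*1e3:.1f} mm")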
Further, the through hole11described above has a dimension corresponding to the dimension of the coil portion21; if the radius of the coil portion21is approximately 16 mm, the radius of the through hole11is set to be less than that. The slit22is a gap provided between one end and the other end of the coil portion21, which may be, for example, from several tenths of a millimeter to several millimeters, and the length of one loop of the coil portion21excluding such gap is set to one wavelength of the high-frequency current. The lead portion23has a first lead portion23aand a second lead portion23bdrawn from one end of the coil portion21, and the first lead portion23ais connected to the upper layer power source40, and the second lead portion23bis connected to the GND. As a result, a path of electric current is formed in which the electric current flowing from the upper layer power source40flows from the first lead portion23ato the second lead portion23bthrough the coil portion21. Further, in order to suppress a reflection of the electric current flowing from the second lead portion23bto the GND, a resistor60is provided at a position between the second lead portion23band the GND. The lower layer coil30has a shape corresponding to the upper layer coil20. The lower layer coil30also has the coil portion31, a slit32that partially cuts out the coil portion31, and a lead portion33that is arranged on both sides of the slit32and is drawn out in the outer peripheral direction of the coil portion31. The coil portion31and the lead portion33are made of a second conductive material, such as copper as described above, for example. The coil portion31constitutes a loop circuit composed of a loop-shaped coil. Specifically, the coil portion31is formed in the same shape and dimensions as the coil portion21of the upper layer coil20, and is arranged to face the coil portion21at a predetermined distance. The slit32also has the same dimensions as the slit22of the upper layer coil20. In the present embodiment, the slit32is formed at the same position as the slit22. The lead portion33has a first lead portion33aand a second lead portion33bdrawn from one end of the coil portion31, and the first lead portion33ais connected to the lower layer power source50, and the second lead portion33bis connected to the GND. In such manner, a path of electric current is formed in which the electric current flowing from the lower layer power source50flows from the first lead portion33ato the second lead portion33bvia the coil portion31. Further, in order to suppress a reflection of the electric current flowing from the second lead portion33bto the GND, a resistor70is provided at a position between the second lead portion33band the GND. The center of the coil portion21of the upper layer coil20and the center of the coil portion31of the lower layer coil30are aligned, and their common central axis is the Z axis. This central axis may also be called the coil central axis. Further, a plane located between the upper layer coil20and the lower layer coil30and parallel to the coil portion21and the coil portion31is the XY plane. The upper layer power source40is a high-frequency power source that supplies a high-frequency current to the upper layer coil20. The upper layer power source40generates a high-frequency current in which one wavelength is the length of one loop of the coil portion21.
The lower layer power source50is a high-frequency power source that supplies a high-frequency current to the lower layer coil30. The lower layer power source50generates a high-frequency current in which one wavelength is the length of one loop of the coil portion31. Here, a high-frequency current of about 2.87 GHz is passed from the upper layer power source40and from the lower layer power source50. The magnetic field generator1is configured in the above-described manner. Although the details of the magnetic field generator1configured in such manner are described later, a high-frequency magnetic field is generated in the XY plane which is positioned between the upper layer coil20and the lower layer coil30. The diamond2corresponds to a magnetic field measuring element that measures an external magnetic field, and is arranged in the through hole11. Here, the diamond2is arranged to be positioned in the XY plane that generates a high-frequency magnetic field at a position between the upper layer coil20and the lower layer coil30. When the diamond2is irradiated with light having a specific wavelength and a high-frequency magnetic field is applied thereto, the diamond2undergoes wavelength conversion to generate fluorescence. The light source3irradiates the diamond2with, for example, a laser beam as light having a specific wavelength. The light source3is arranged outside of the substrate10, that is, outside of the upper layer coil20and the lower layer coil30in the radial direction, and irradiates the diamond2with light through a space between the upper layer coil20and the lower layer coil30. Here, for example, the light source3is arranged so that the laser light is irradiated along the XY plane. However, the light source3may also be arranged so that the laser light is irradiated obliquely with respect to the XY plane. For example, a green laser beam is output from the light source3, and the wavelength is converted by the diamond2to generate red fluorescence. The temperature control unit4is used to adjust the temperature of the diamond2. The temperature control unit4is arranged to be in contact with the diamond2. The diamond2generates fluorescence by converting the wavelength of the irradiated light, and at such time, energy loss occurs and heat is generated. The temperature control unit4adjusts the temperature of the diamond2at the time of heat generation by cooling the diamond2or by another method. The measurement unit5is for measuring the light emitted by the diamond2, and is composed of a light receiving element or the like. As described above, when the diamond2fluoresces, fluorescence is output in various directions. Therefore, by arranging the measurement unit5outside the through hole11, the measurement unit5is enabled to measure the light emission of the diamond2. Then, the measurement unit5measures the light emitted by the diamond2to observe physical phenomena such as a shape of the diamond2and the like. Since the diamond2absorbs energy due to the unpaired electrons of the measurement target based on ESR (Electron Spin Resonance) and changes its characteristics, the minute magnetic field generated by the measurement target becomes measurable via the measurement of these physical phenomena. The magnetic sensor including the magnetic field generator1according to the present embodiment is configured in the above-described manner.
As described above, the magnetic field generator1according to the present embodiment generates a high-frequency magnetic field, and the diamond2can be used as a measuring element to measure the external magnetic field. At such time, the magnetic field generator1is configured to generate a high-frequency magnetic field which has the magnetic field direction aligned in the XY plane, and, in addition, since the substrate10is thin, the minute magnetic field generated by the measurement target becomes measurable on the front and back surfaces above and below the substrate10. Further, since the through hole11is formed by hollowing out the substrate10, the measurement target that is the source of the minute magnetic field can be brought closer to the diamond2or to the high-frequency magnetic field, so that the minute magnetic field can be more accurately measured. Here, the mechanism by which the magnetic field direction of the high-frequency magnetic field can be set to the XY plane as described above is described in comparison to the conventional structure. As described above, the magnetic field generator1of the present embodiment has the upper layer coil20and the lower layer coil30arranged in an overlapping manner, which respectively receive supply of the high-frequency current from the upper layer power source40and the lower layer power source50. Further, the lead portion23of the upper layer coil20and the lead portion33of the lower layer coil30are arranged to have the same position when viewed from the normal direction of the substrate10. In such a configuration, a high-frequency current having a phase difference of 180° is applied to the upper layer coil20and the lower layer coil30. Then, the frequency of the high-frequency current is set to around 2.87 GHz so that one wavelength of the high-frequency current becomes substantially equal to the length of one loop of the coil portion21of the upper layer coil20and the coil portion31of the lower layer coil30. In the following description, the phase of the high-frequency current at an end (i.e., the lead portion) of the coil portion21of the upper layer coil20and the coil portion31of the lower layer coil30where the high-frequency current is input at a supply start timing of the high-frequency current is referred to as an initial phase. When such a high-frequency current is passed, for example, in the lower layer coil30, as shown inFIGS.5A and5B, the high-frequency current flows from the first lead portion33ato the second lead portion33b. In such case, taking a point P1at the position of the first lead portion33aas 0° and a point P2at the position of the second lead portion33bas 360°, the polarity of the electric current is reversed at point-symmetric positions about the coil central axis. For example, assuming that the waveform of the high-frequency current flowing at each position from 0° to 360° at an arbitrary timing is as shown inFIG.7, the phases are reversed at points P3and P4, and the directions of the electric currents become opposite to each other. Therefore, for example, at the point P3at a 90° position and the point P4at a 270° position inFIG.5A, counter-clockwise magnetic fields E1and E2are generated based on the right-handed screw rule when seen from the first lead portion33aand the second lead portion33b.
On the other hand, since a high-frequency current having a phase difference of 180° from that of the lower layer coil30is passed to the upper layer coil20, a magnetic field opposite to that of the lower layer coil30is generated in the upper layer coil20. Therefore, for example, when the magnetic fields at the positions of the points P3and P4are shown, the upper layer coil20has clockwise magnetic fields E3and E4generated therein, and the lower layer coil30has counter-clockwise magnetic fields E1and E2generated therein, respectively, as shown inFIGS.5B and6. Therefore, the directions of the magnetic fields E1to E4match (i.e., are aligned) with each other around the positions of the points P3and P4in the substrate10, in other words, at positions in between the upper layer coil20and the lower layer coil30. That is, (A) the magnetic fields E1and E3have the same direction at a lower part of the upper layer portion having the upper layer coil20(i.e., at a position close to the lower layer coil30) and at an upper part of the lower layer portion having the lower layer coil30(i.e., at a position close to the upper layer coil20), and (B) the magnetic fields E2and E4likewise have the same direction at the corresponding positions. In such manner, a high-frequency magnetic field H is generated between the upper layer coil20and the lower layer coil30with a direction from the point P4to the point P3as the magnetic field direction, as shown by a white arrow inFIG.5B. Since the electric current flowing in the upper layer coil20and the lower layer coil30is a high-frequency current, the position where the current amplitude takes the maximum value and the position where the current amplitude takes the minimum value respectively change over time, so that a high-frequency magnetic field whose magnetic field direction changes accordingly in the XY plane is generated.
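The phase bookkeeping behind this can be checked numerically. The sketch below models the current on a one-wavelength loop as a wave whose electrical phase advances one degree per degree of angular position from the feed; it is an illustrative simplification (no losses, perfect matching), not a field solver, and all names are assumptions for illustration.

    import math

    def loop_current(theta_deg, snapshot_deg, initial_phase_deg=0.0):
        # instantaneous current at angular position theta on a one-wavelength
        # loop, at a given snapshot of the drive cycle
        return math.cos(math.radians(theta_deg + snapshot_deg + initial_phase_deg))

    # snapshot at 45 degrees of the drive cycle
    for theta in (90, 270):  # the points P3 and P4
        lower = loop_current(theta, 45)                          # lower layer coil
        upper = loop_current(theta, 45, initial_phase_deg=180)   # 180 deg offset
        print(theta, round(lower, 3), round(upper, 3))

The printout shows opposite signs at 90° versus 270° (the point-symmetric reversal within each coil) and opposite signs between the two coils at the same angle, which is what makes the in-plane fields of the two coils add up between the layers.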
As a comparative example, shown inFIG.8, consider a case where a direct current is passed in a structure in which a coil J20wound a plurality of times is provided in a substrate J10. In such a configuration, the electric current is reversed at positions symmetrical with respect to the coil central axis. Therefore, as shown inFIG.8, a counter-clockwise magnetic field EJ1is generated at the position on the left side of the drawing where the electric current flows in a direction coming up from behind the paper surface toward the reader in each of the plural windings of the coil J20. Further, a clockwise magnetic field EJ2is generated at the position on the right side of the drawing where the electric current flows in a direction sinking into the paper surface away from the reader. Therefore, a magnetic field HJ in the coil central axis direction is generated in the coil J20. In such a case, the measurement target needs to be arranged on the lateral side of the substrate J10, that is, on an outer side in the radial direction of the coil J20, which may pose a limitation regarding how close the measurement target is positionable relative to the coil J20due to the structure described above. Further, if the measurement target is placed directly above or below the substrate J10, that is, in the axial direction of the coil J20, though the distance to the measurement target becomes shorter, the external magnetic field is not measurable because the magnetic sensor has no sensitivity in the axial direction. Therefore, it can be said that it is effective to use the magnetic field generator1capable of orienting the magnetic field direction in the XY plane as in the present embodiment. As described above, in the magnetic field generator1of the present embodiment, the upper layer coil20and the lower layer coil30are arranged so that high-frequency currents can be supplied from the upper layer power source40and the lower layer power source50, respectively. Further, the lead portion23of the upper layer coil20and the lead portion33of the lower layer coil30are arranged to have the same position when viewed from the normal direction of the substrate10. In such a configuration, a high-frequency current having a phase difference of 180° is passed through the upper layer coil20and the lower layer coil30. The length of one wavelength of the high-frequency current is set to be substantially equal to the length of one loop of the coil portion21of the upper layer coil20and the coil portion31of the lower layer coil30. In such a configuration, the direction of the magnetic field generated on the lower layer coil30side of the upper layer portion of the substrate10which has the upper layer coil20provided therein and the direction of the magnetic field generated on the upper layer coil20side of the lower layer portion of the substrate10which has the lower layer coil30provided therein are matched. This makes it possible to generate a high-frequency magnetic field having the XY plane as the magnetic field direction at a position between the upper layer coil20and the lower layer coil30. Thus, the magnetic sensor provided with such a magnetic field generator1is configured to have sensitivity in the axial direction of the upper layer coil20and the lower layer coil30, and the measurement target generating a very small magnetic field can be brought close to a position directly above or directly below the substrate10. Therefore, such a magnetic sensor is made more accurate. Further, the magnetic field generator1of the present embodiment separately includes the upper layer power source40that supplies a high-frequency current to the upper layer coil20and the lower layer power source50that supplies a high-frequency current to the lower layer coil30. Therefore, high-frequency currents having opposite phases can be supplied from the upper layer power source40and the lower layer power source50to the upper layer coil20and the lower layer coil30, respectively. Specifically, with respect to the magnetic field generator1of the present embodiment, the flow of electric current in the upper layer coil20and the lower layer coil30and the generated high-frequency magnetic field were investigated by simulation. As a result, the diagrams shown inFIGS.9A to9C and10were obtained. When high-frequency currents having opposite phases with a 180° phase difference are passed through the upper layer coil20and the lower layer coil30, the directions of the electric current at various parts at an arbitrary timing are shown inFIGS.9A to9Cby arrows.
That is, in the upper layer coil20, the directions of the electric current are opposite to each other at point-symmetric positions with respect to the coil central axis. Similarly, in the lower layer coil30, the directions of the electric current are opposite to each other at point-symmetric positions with respect to the coil central axis. Further, at the same angle position with respect to the coil central axis, the electric currents in the upper layer coil20and the lower layer coil30flow in opposite directions. Then, as shown inFIG.9C, in the upper layer coil20, an electric current flows out from an arbitrary position on the side opposite to the lead portion23with respect to the coil central axis, and in the lower layer coil30, the electric current flows into an arbitrary position on the side opposite to the lead portion23with respect to the coil central axis. Therefore, in the cross-sectional diagram shown inFIG.10, the upper layer coil20generates a clockwise magnetic field, and the lower layer coil30generates a counter-clockwise magnetic field. Therefore, as shown inFIG.10, at the position of the diamond2, a high-frequency magnetic field pointing in the left direction of the paper surface can be generated, which shows that a high-frequency magnetic field can be generated in the XY plane. Second Embodiment The second embodiment is described. In the present embodiment, the configurations of the upper layer coil20and the lower layer coil30are changed from those in the first embodiment, and the other parts are the same as those in the first embodiment. Therefore, the description focuses on such differences. As shown inFIGS.11and12, in the present embodiment, the upper layer coil20and the lower layer coil30provided in the magnetic field generator1each have a double-layer structure. That is, the upper layer coil20is composed of a first coil210and a second coil220, and the lower layer coil30is composed of a third coil310and a fourth coil320. The first coil210is configured to have a coil portion211, a slit212, and a lead portion213having a first lead portion213aand a second lead portion213b. The coil portion211, the slit212, and the lead portion213have the same configuration as the coil portion21, the slit22, and the lead portion23described in the first embodiment. Further, the second coil220is configured to have a coil portion221, a slit222, and a lead portion223having a first lead portion223aand a second lead portion223b. The coil portion221, the slit222, and the lead portion223have the same configuration as the coil portion21, the slit22, and the lead portion23described in the first embodiment. However, here, the position where the slit212and the lead portion213of the first coil210are provided is different from the position where the slit222and the lead portion223of the second coil220are provided, the two positions being shifted by 180° with respect to the coil central axis. Further, the upper layer power source40includes a first upper layer power source41and a second upper layer power source42. The first upper layer power source41is connected to the first lead portion213ato energize the first coil210, and the second upper layer power source42is connected to the first lead portion223ato energize the second coil220. Further, a resistor61for reflection suppression is provided between the second lead portion213band the GND, and a resistor62for reflection suppression is provided between the second lead portion223band the GND.
The third coil310is configured to have a coil portion311, a slit312, and a lead portion313having a first lead portion313aand a second lead portion313b. The coil portion311, the slit312, and the lead portion313have the same configuration as the coil portion31, the slit32, and the lead portion33described in the first embodiment. Further, the fourth coil320is configured to have a coil portion321, a slit322, and a lead portion323having a first lead portion323aand a second lead portion323b. The coil portion321, the slit322, and the lead portion323have the same configuration as the coil portion31, the slit32, and the lead portion33described in the first embodiment. However, here, the position where the slit312and the lead portion313of the third coil310are provided is different from the position where the slit322and the lead portion323of the fourth coil320are provided, the two positions being shifted by 180° with respect to the coil central axis. Further, the lower layer power source50includes a first lower layer power source51and a second lower layer power source52. The first lower layer power source51is connected to the first lead portion313ato energize the third coil310, and the second lower layer power source52is connected to the first lead portion323ato energize the fourth coil320. Further, a resistor71for reflection suppression is provided between the second lead portion313band the GND, and a resistor72for reflection suppression is provided between the second lead portion323band the GND. In such a configuration, a high-frequency current is passed through the first coil210and the second coil220constituting the upper layer coil20so that the electric currents at the same angle with respect to the coil central axis are in phase. That is, with respect to the first coil210and the second coil220, since the positions of the lead portion213and the lead portion223are shifted by 180°, the phase of the high-frequency current to be passed is also shifted by 180°. Further, a high-frequency current having the same phase at the same angle with respect to the coil central axis is also passed through the third coil310and the fourth coil320constituting the lower layer coil30. However, to the third coil310and the fourth coil320, high-frequency currents having a 180° phase difference from those of the first coil210and the second coil220are supplied. That is, since the positions of the lead portion313and the lead portion323of the third coil310and the fourth coil320are also shifted by 180°, the phase of the high-frequency current to be passed is also shifted by 180°. Further, regarding the third coil310, since the lead portion313is arranged at the same angle as the lead portion213of the first coil210, the high-frequency current is 180° out of phase with respect to the first coil210. Similarly, with respect to the fourth coil320, since the lead portion323is arranged at the same angle as the lead portion223of the second coil220, the high-frequency current is 180° out of phase with respect to the second coil220. In such manner, as shown inFIG.12, the magnetic fields E3and E4in the same direction can be generated at the same angle with respect to the coil central axis in the first coil210and the second coil220. Further, the magnetic fields E1and E2in opposite directions can be generated in the third coil310and the fourth coil320at the same angle, with respect to the coil central axis, as the magnetic fields E3and E4of the first coil210and the second coil220.
Therefore, even if the upper layer coil 20 and the lower layer coil 30 are each composed of two layers, a high-frequency magnetic field H having the XY plane as the magnetic field direction can be generated between the upper layer coil 20 and the lower layer coil 30. If the upper layer coil 20 and the lower layer coil 30 are composed of two layers in such manner, the intensity of the magnetic field generated by the upper layer coil 20 and the lower layer coil 30 can be increased, and a stronger high-frequency magnetic field is generatable.

Third Embodiment

The third embodiment is described. The present embodiment is a modification of the layout of the upper layer coil 20 and the lower layer coil 30 in the first embodiment, and has the same configuration as the first embodiment otherwise. Thus, the description focuses on the differences therefrom.

As shown in FIGS. 13 and 14, in the present embodiment, the formation positions of the slit 22 and the lead portion 23 of the upper layer coil 20 and the formation positions of the slit 32 and the lead portion 33 of the lower layer coil 30 are different. Here, the formation positions of the slit 22 and the lead portion 23 of the upper layer coil 20 and the formation positions of the slit 32 and the lead portion 33 of the lower layer coil 30 are shifted by 90° with respect to the coil central axis. Specifically, assuming that the position of the first lead portion 23a in the upper layer coil 20 is 0° and the position of the second lead portion 23b is 360°, the slit 32 and the lead portion 33 in the lower layer coil 30 are arranged at 270°.

In such a configuration, the initial phase of the high-frequency current flowing through the upper layer coil 20 is set to 90°, and the initial phase of the high-frequency current flowing through the lower layer coil 30 is set to 0°. In such manner, the phase of the high-frequency current can be shifted by 180° at the same angle with respect to the coil central axis between the upper layer coil 20 and the lower layer coil 30.

Therefore, even if the formation positions of the slit 22 and the lead portion 23 of the upper layer coil 20 and the formation positions of the slit 32 and the lead portion 33 of the lower layer coil 30 are at different angles with respect to the coil central axis, i.e., do not have the same angle, the same effect as the first embodiment is obtainable. Here, the formation positions of the slit 22 and the lead portion 23 of the upper layer coil 20 and the formation positions of the slit 32 and the lead portion 33 of the lower layer coil 30 are shifted by 90° with respect to the coil central axis; however, the shift angle may, of course, be other than 90°.

Fourth Embodiment

The fourth embodiment is described. In the present embodiment, the shapes of the upper layer coil 20 and the lower layer coil 30 are changed with respect to the first to third embodiments, and the other parts are the same as those in the first to third embodiments. Thus, only the parts different from the first to third embodiments are described.

As shown in FIGS. 15 and 16, in the present embodiment, the coil portion 21 of the upper layer coil 20 and the coil portion 31 of the lower layer coil 30 are not annular but quadrangular. Specifically, the coil portion 21 is formed in a rectangular shape composed of two opposing short sides and two opposing long sides, and the slit 22 and the lead portion 23 are arranged on one of the short sides. Similarly, the coil portion 31 is formed in a rectangular shape composed of two opposing short sides and two opposing long sides, and the slit 32 and the lead portion 33 are arranged on one of the short sides.
The coil portion 21 and the coil portion 31 are arranged to face each other so that their short sides overlap each other and their long sides overlap each other in a plan view. The slit 22, the lead portion 23, the slit 32, and the lead portion 33 may be arranged at the same angle with respect to the coil central axis as in the first embodiment; however, in the present embodiment, they are arranged at positions shifted by 180°. In such an arrangement, a high-frequency current having an initial phase of 0° may be passed through the upper layer coil 20 and the lower layer coil 30.

In such manner, even when the coil portion 21 and the coil portion 31 have a quadrangular shape, the same effect as the first embodiment is achievable if the length per loop is set to one wavelength of the high-frequency current flowing through them. Further, when the coil portion 21 and the coil portion 31 have a rectangular shape, the magnetic field direction is controllable by adjusting the aspect ratio, which is the ratio of the vertical dimension to the horizontal dimension of the rectangular shape on the XY plane. When the coil portion 21 and the coil portion 31 have a rectangular shape, the aspect ratio corresponds to the ratio of the long side to the short side. In such a configuration, a high-frequency magnetic field is generated weakly in the direction of an arrow E along the long side and strongly in the direction of an arrow F along the short side. Therefore, the direction of the magnetic field can be substantially controlled to the direction of the arrow F.

Fifth Embodiment

The fifth embodiment is described. The present embodiment is the same as the first to fourth embodiments except that the form of the high-frequency current input to the upper layer coil 20 and the lower layer coil 30 is changed. Therefore, only the parts of the present embodiment different from the first to fourth embodiments are described.

As shown in FIG. 17, the magnetic field generator 1 of the present embodiment is provided with a phase adjuster 80 that adjusts the phase of the high-frequency current flowing through the upper layer coil 20 and a phase adjuster 90 that adjusts the phase of the high-frequency current flowing through the lower layer coil 30. Then, the high-frequency current output from the upper layer power source 40 is phase-adjusted by the phase adjuster 80, and a high-frequency current having the same phase is passed to both ends of the coil portion 21 through both the first lead portion 23a and the second lead portion 23b. Similarly, the high-frequency current output from the lower layer power source 50 is phase-adjusted by the phase adjuster 90, and a high-frequency current having the same phase is passed to both ends of the coil portion 31 through both the first lead portion 33a and the second lead portion 33b.

In the magnetic field generator 1 configured in such manner, a standing wave can be generated by the high-frequency currents flowing through the upper layer coil 20 and the lower layer coil 30. For example, as shown in FIGS. 17 and 18, the upper layer coil 20 has, for an illustration of the phase, five points G to K at substantially 90° intervals with respect to the coil central axis, from one end on the first lead portion 23a side to the other end on the second lead portion 23b side. In such case, for example, a high-frequency current is supplied to both ends of the coil portion 21 as an input I and an input II, and the phase difference of the high-frequency currents is set to 0° (i.e., no phase difference between the inputs I and II).
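Before turning to the resulting wave pattern, the superposition can be checked numerically. The following sketch (Python, illustrative only) assumes that the currents injected at the two ends travel around the one-wavelength loop in opposite senses, so that one wave acquires a sign flip when both are expressed in a common circulation direction; the point labels follow FIG. 18.

    import numpy as np

    theta = np.linspace(0.0, 2 * np.pi, 721)   # G=0, H=90, I=180, J=270, K=360 (degrees)
    t = np.linspace(0.0, 2 * np.pi, 400)       # one RF period

    def peak_amplitude(phase_diff):
        # Counter-propagating waves injected at the two ends of the coil portion 21.
        wave_in1 = np.sin(t[:, None] - theta[None, :])
        wave_in2 = -np.sin(t[:, None] + theta[None, :] + phase_diff)
        return np.abs(wave_in1 + wave_in2).max(axis=0)

    deg = np.degrees(theta)
    for dphi, label in [(0.0, "0 deg"), (np.pi, "180 deg")]:
        a = peak_amplitude(dphi)
        samples = {d: round(float(a[np.argmin(np.abs(deg - d))]), 2) for d in (0, 90, 180, 270, 360)}
        print(label, samples)

With a 0° input phase difference, the peak amplitude is near zero at G, I, and K and maximal at H and J; with a 180° difference, the nodes and antinodes swap, matching the two cases described next.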
In such manner, as shown in FIG. 19A, the high-frequency current can generate a standing wave having the positions of points G, I, and K as nodes and the positions of points H and J as antinodes having the maximum amplitude. In such case, it is possible to generate a high-frequency magnetic field in which the magnetic field direction alternates repeatedly between the following two directions, i.e., the direction from the point H to the point J and the direction from the point J to the point H.

Further, for example, when the phase difference of the high-frequency currents flowing from both ends of the coil portion 21 is set to 180°, as shown in FIG. 19B, a standing wave with the positions of the points G, I, and K as antinodes and the positions of the points H and J as nodes can be generated. In such case, it is possible to generate a high-frequency magnetic field in which the magnetic field direction alternates repeatedly between the following two directions, i.e., the direction from the point G or the point K to the point I and the direction from the point I to the point G or the point K.

Further, although an example of the upper layer coil 20 has been described with reference to FIGS. 18, 19A, and 19B, the same applies to the lower layer coil 30. Then, standing waves having a 180° phase difference are formed in the upper layer coil 20 and the lower layer coil 30, respectively. In such manner, matching between the directions of the two magnetic fields is achievable. That is, the direction of the magnetic field generated on the lower layer coil 30 side of the upper layer portion of the substrate 10, in which the upper layer coil 20 is provided, and the direction of the magnetic field generated on the upper layer coil 20 side of the lower layer portion of the substrate 10, in which the lower layer coil 30 is provided, are matchable. Therefore, a high-frequency magnetic field having the XY plane as the magnetic field direction can be generated.

In such manner, a standing wave can be generated by allowing a high-frequency current to flow from both ends of the upper layer coil 20 and the lower layer coil 30. Thus, while limiting the magnetic field direction to a specific direction, a high-frequency magnetic field whose magnetic field direction alternates by 180° is generatable on the XY plane.

Note that the phase adjuster 80, which in the present disclosure generates the high-frequency currents input from both the first lead portion 23a and the second lead portion 23b based on the single signal source of the upper layer power source 40, may be modified. That is, the phase adjuster 80 is not limited to adjusting the phase of one signal and dividing it into two signals; it may instead receive two signals, adjust the phase of each, and output the adjusted signals as the high-frequency currents. Of course, the same applies to the phase adjuster 90.

Sixth Embodiment

The sixth embodiment is described. The present embodiment is the same as the first to fourth embodiments except that the form of the high-frequency current input to the upper layer coil 20 and the lower layer coil 30 is changed. Therefore, only the parts of the present embodiment different from the first to fourth embodiments are described.
In the magnetic field generator 1 of the present embodiment, as shown in FIGS. 20A and 20B, the same configuration as that of the first embodiment and the like is further modified so that the input direction of the high-frequency current to the upper layer coil 20 is switchable, thereby allowing clockwise and counter-clockwise rotation of the magnetic field direction as a circularly polarized wave. For example, as shown in FIGS. 20A and 20B, an input change switch 100 is provided between the upper layer coil 20 on one side and the upper layer power source 40 and the resistor 60 on the other side, for switching the input end of the upper layer coil 20 that receives the input of the high-frequency current. Further, although not illustrated, the lower layer coil 30 is also provided with an input change switch 100, just like the upper layer coil 20, at a position between the lower layer coil 30 on one side and the lower layer power source 50 and the resistor 70 on the other side, for switching the input of the high-frequency current.

In such manner, it is possible to control the rotation direction of the magnetic field direction on the XY plane. For example, a high-frequency current is input to the upper layer coil 20 from the first lead portion 23a, and a high-frequency current with a 180° phase difference relative to the upper layer coil 20 is input to the lower layer coil 30 from the first lead portion 33a. In such case, as shown in FIG. 20A, the direction of the magnetic field can be rotated rightward, i.e., clockwise, from a point M toward a point L. On the contrary, a high-frequency current is input to the upper layer coil 20 from the second lead portion 23b, and a high-frequency current with a 180° phase difference relative to the upper layer coil 20 is likewise input to the lower layer coil 30 from the second lead portion 33b. In such case, as shown in FIG. 20B, the direction of the magnetic field can be rotated leftward, i.e., counter-clockwise, from the point L to the point M.

In particular, in a magnetic sensor using a diamond NVC (Nitrogen Vacancy Center), circularly polarized waves are used for the high-frequency magnetic field, and by switching the direction of the circularly polarized waves, unpaired electrons can be selectively pumped to either of the degenerate levels ms = ±1. In such manner, a highly sensitive magnetic sensor with an excellent minimum resolution can be realized.

Seventh Embodiment

The seventh embodiment is described. In the present embodiment, the upper layer coil 20 is changed to a resonance coil with respect to the first to sixth embodiments, and the other parts are the same as those in the first to sixth embodiments. Therefore, only the parts different from the first to sixth embodiments are described in the present embodiment. In the following, a case where the coil portion 21 of the upper layer coil 20 and the coil portion 31 of the lower layer coil 30 each have an annular shape as in the first embodiment is described as an example; however, the configuration may also be one of those shown in the second to sixth embodiments.

As shown in FIGS. 21, 22, and 23, the upper layer coil 20 is composed of only the coil portion 21 and the slit 22, and a high-frequency current is not supplied from a power source to the upper layer coil 20. The high-frequency current is supplied from the power source 50 only to the lower layer coil 30.
In such a configuration, when a high-frequency current is passed through the lower layer coil 30, the upper layer coil 20 is magnetically coupled or field-coupled to the lower layer coil 30, and the upper layer coil 20 functions as a resonance coil to generate LC resonance. Therefore, the resonance frequency of the upper layer coil 20 is adjusted to the frequency at which the length per loop of the coil portion 31 corresponds to one wavelength. That is, the frequency at which the electric length of the coil portion 31 becomes one wavelength is set as the resonance frequency.

The magnetic field generator 1 having such a configuration can also be used. In such a configuration, when a high-frequency current is passed through the lower layer coil 30, the electric currents of the upper layer coil 20 and the lower layer coil 30 at the same angle with respect to the coil central axis can be controlled, based on the LC resonance, to flow in directions opposite to each other. Therefore, as in the first embodiment, a high-frequency magnetic field having, as its magnetic field direction, the XY plane between the upper layer coil 20 and the lower layer coil 30 is generatable. Thereby, the same effect as that of the first embodiment is achievable.

Although one slit 22 is formed in the upper layer coil 20 here, the slit 22 may be omitted, or a plurality of slits 22 may be formed. The number of slits 22 and the size of their gaps may be appropriately set so that the resonance frequency based on the LC resonance matches the frequency at which the length per loop of the coil portion 21 is one wavelength.

Further, the present embodiment can also be applied to a configuration for generating a standing wave of a high-frequency current as in the fifth embodiment. In such case, the configuration may include the phase adjuster 90 that supplies high-frequency currents from both ends of the coil portion 31 of the lower layer coil 30. Further, the present embodiment can also be applied to the configuration of the sixth embodiment. In such case, since the high-frequency current is not directly supplied from a power source to the upper layer coil 20, which is the resonance coil, a structure in which the input change switch 100 is provided may be applied to the lower layer coil 30.

As a reference, with respect to the magnetic field generator 1 of the present embodiment, the current flow and the generated high-frequency magnetic field in the upper layer coil 20 and the lower layer coil 30 were investigated by simulation, and the simulation results shown in FIGS. 24 and 25 were obtained. The direction of the electric current in various parts of the upper layer coil 20 and the lower layer coil 30 at an arbitrary timing when a high-frequency current is passed through the lower layer coil 30 is indicated by arrows in FIG. 24. That is, in the lower layer coil 30, the direction of the electric current is opposite at point-symmetric positions with respect to the coil central axis. Further, by passing a high-frequency current through the lower layer coil 30, a high-frequency current also flows through the upper layer coil 20, and in the upper layer coil 20 as well, the direction of the current is opposite at point-symmetric positions with respect to the coil central axis. Further, at the same angle position with respect to the coil central axis, the electric currents in the upper layer coil 20 and the lower layer coil 30 flow in opposite directions.
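As an aside, the one-wavelength tuning stated at the beginning of this embodiment can be made concrete. The sketch below computes the frequency at which the electrical length of one loop equals one wavelength; the coil diameter and the effective relative permittivity of the substrate are illustrative assumptions, not values from the source.

    import math

    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def one_wavelength_frequency(loop_circumference_m, eps_r_eff):
        # Frequency at which the loop's electrical length is one wavelength.
        v = C0 / math.sqrt(eps_r_eff)       # phase velocity along the conductor
        return v / loop_circumference_m     # wavelength equals the circumference

    circumference = math.pi * 0.030         # assumed 30 mm diameter annular coil portion
    f = one_wavelength_frequency(circumference, eps_r_eff=4.0)  # assumed epoxy-like substrate
    print(f"about {f / 1e9:.2f} GHz")       # ballpark drive/resonance frequency

The number of slits 22 and their gap sizes would then be chosen, per the description above, so that the LC resonance of the upper layer coil 20 lands on this frequency.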
Returning to the simulation: as shown in FIG. 24, the upper layer coil 20 is in a state where an electric current flows out of an arbitrary place on the slit 22 side, and the lower layer coil 30 is in a state where an electric current flows into an arbitrary place on the lead portion 33 side. Although the arrow indicating the direction of the electric current is shown protruding from the upper layer coil 20 on the side of the upper layer coil 20 opposite to the slit 22 with respect to the coil central axis, the electric current actually flows from the upper end surface of the upper layer coil 20 to a side surface thereof, toward the upper-right corner of FIG. 24.

Therefore, in the cross section shown in FIG. 25, the upper layer coil 20 generates a clockwise magnetic field, and the lower layer coil 30 generates a counter-clockwise magnetic field. Therefore, as shown in FIG. 25, it can be seen that (a) a high-frequency magnetic field can be generated in the left direction of the paper surface at the position of the diamond 2, and (b) a high-frequency magnetic field can be generated in the XY plane.

OTHER EMBODIMENTS

Although the present disclosure is described with reference to the embodiments described above, the present disclosure is not limited to such embodiments but may include various changes and modifications which are within equivalent ranges. In addition, various combinations and forms, as well as other combinations and forms including only one element, more than that, or less than that, are also within the scope and idea of the present disclosure.

For example, in each of the above embodiments, a structure in which the upper layer coil 20 and the lower layer coil 30 are provided in one substrate 10 and integrated is given as an example. However, this is only an example, and the substrate 10 may be divided into a plurality of sheets, with an upper layer portion having the upper layer coil 20 and a lower layer portion having the lower layer coil 30 provided separately and a dielectric film sandwiched therebetween. In such case, at least the portion of the substrate 10 between the upper layer coil 20 and the lower layer coil 30 may be made of a dielectric material.

Further, although a case where the upper layer coil 20 and the lower layer coil 30 each have two layers has been described in the second embodiment, each of the coils 20 and 30 may have only one layer or may have a plurality of layers, i.e., two or more layers. The number of layers of the upper layer coil 20 and the number of layers of the lower layer coil 30 may be the same or different. Further, even in a structure in which the upper layer coil 20 and the lower layer coil 30 have one layer or a plurality of layers, the slits and the lead portions may be arranged at different angles with respect to the coil central axis, as described in the third embodiment.

Further, in the fourth embodiment, a rectangular shape is given as one example of a case where the coil portion 21 of the upper layer coil 20 and the coil portion 31 of the lower layer coil 30 have a polygonal shape. However, this is also only an example, and the shape of the coil portion may be a quadrangle other than a rectangle, for example, a rhombus, or may be a polygon other than a quadrangle, such as a triangle or a pentagon. Of course, the annular shape may also be an elliptical shape, and each corner of the polygonal shape may be rounded.
In the first to third and fifth to seventh embodiments, since the coil portion 21 of the upper layer coil 20 and the coil portion 31 of the lower layer coil 30 are formed in an annular shape, the shape of the coil on the XY plane has an aspect ratio of vertical to horizontal dimensions of 1:1, assuming that one direction of the XY plane is the vertical direction and the direction perpendicular to it is the horizontal direction. However, if the coil portion 21 and the coil portion 31 have an elliptical shape, the aspect ratio can be made different from 1:1, and a strong high-frequency magnetic field can be generated in the direction along the minor axis. Of course, even when the coil portion 21 and the coil portion 31 have a polygonal shape other than a rectangle, a strong high-frequency magnetic field can likewise be generated in the direction along the shorter side by making the aspect ratio different from 1:1.

Further, in each of the above embodiments, the case where (a) the first conductive material constituting the upper layer coil 20 and the second conductive material constituting the lower layer coil 30 are copper and (b) the material of the substrate 10 is an epoxy-based resin material has been described as an example. However, such a configuration is only an example, and other materials may also be used. It may be preferable that the constituent materials of the upper layer coil 20 and the lower layer coil 30 be the same, but different constituent materials may also be used.

Further, in each of the above embodiments, the diamond 2 has been described as an example of the magnetic field measuring element, but an object other than the diamond 2 may also be used. Further, in each of the above embodiments, the case where the magnetic field generator 1 is applied to a magnetic sensor that receives fluorescence generated by irradiation with a laser beam has been described. However, the magnetic field generator 1 can also be applied to methods of (a) obtaining an electric signal by irradiating a laser beam or (b) obtaining an electrical output by inputting an electric signal. That is, it can be applied to PDMR (Photocurrent Detection Magnetic Resonance), EDMR (Electric Detection Magnetic Resonance), and the like.

In each of the above embodiments, the expressions "upper" and "lower" are used in the terms upper layer coil 20 and lower layer coil 30, but they merely indicate that the coils constituting the two loop circuits overlap and are lined up at a predetermined distance; they do not specify a top-bottom direction.
MODE FOR CARRYING OUT THE INVENTION

Embodiments of the present disclosure will now be described in the following order.

1 First embodiment (apparatus for measuring magnetic characteristics)
2 Second embodiment (apparatus for measuring magnetic characteristics)

1 First Embodiment

[Configuration of Magnetic Tape]

A configuration of a magnetic tape 10 of which magnetic characteristics are measured by an apparatus for measuring magnetic characteristics according to a first embodiment will now be described with reference to FIG. 1. The magnetic tape 10 is a coating-type magnetic tape of a perpendicular magnetic recording system, and includes a long-length substrate 11, a ground layer (nonmagnetic layer) 12 provided on one surface of the substrate 11, a magnetic layer (recording layer) 13 provided on the ground layer 12, and a back layer 14 provided on the other surface of the substrate 11. Note that the ground layer 12 and the back layer 14 are provided as necessary and may be omitted. In the following, out of both surfaces of the magnetic tape 10, the surface on the side on which the magnetic layer 13 is provided is referred to as a magnetic surface 10S1, and the surface on the opposite side, that is, the side on which the back layer 14 is provided, is referred to as a back surface 10S2.

The magnetic layer 13 contains, for example, a magnetic powder, a binder, and electrically conductive particles. The magnetic layer 13 may further contain, as necessary, additives such as a lubricant, a polisher, and an antirust agent. The magnetic powder is oriented in the thickness direction of the magnetic tape 10 (the perpendicular direction). As the magnetic powder, for example, ε-iron oxide magnetic powder, Co-containing spinel ferrite magnetic powder, hexagonal ferrite magnetic powder (for example, barium ferrite magnetic powder), or the like is used.

[Film Formation Apparatus for Magnetic Tapes]

A film formation apparatus 20 used for the film formation of the magnetic tape 10 described above will now be described with reference to FIG. 2. Herein, for ease of description, a case where only the magnetic layer 13 is formed as a film on one surface of the substrate 11 is described. The film formation apparatus 20 is a film formation apparatus of a roll-to-roll form, and includes rolls 21 and 22, a film formation head 23, a drying furnace 24, and an apparatus for measuring magnetic characteristics 30. In the film formation apparatus 20, a film-like substrate 11 wound in a roll form is wound out from one roll 21 and is wound up in a roll form again by the other roll 22. The film formation head 23, the drying furnace 24, and the apparatus for measuring magnetic characteristics 30 are arranged in this order from the upstream side toward the downstream side on the running path of the substrate 11 that continuously moves (continuously runs) from the one roll 21 toward the other roll 22. A magnetic field orientation apparatus for orienting the magnetic field of the magnetic powder contained in a coating material 13a in the perpendicular direction (the thickness direction of the substrate 11) may be provided in the drying furnace 24.

In the film formation apparatus 20 having the configuration mentioned above, the coating material 13a is applied by the film formation head 23 to one surface of the continuously running substrate 11, and then the coating material (coating) 13a is dried by the drying furnace 24; thus, the magnetic layer 13 is formed. Then, magnetic characteristics of the magnetic layer 13 immediately after formation are measured by the apparatus for measuring magnetic characteristics 30.
To stabilize film formation quality and improve the yield while maintaining productivity, it is desirable that, in the process during production, the magnetic characteristics required as quality be continuously measured and precise, quick feedback to the film formation process be made. To measure magnetic characteristics in this process, it is necessary to (A) measure magnetic characteristics without breaking the magnetic tape 10 and (B) measure magnetic characteristics in a state where the magnetic tape 10 continuously moves.

In the first embodiment, in order to enable (A) measuring magnetic characteristics without breaking the magnetic tape 10, magnetic characteristics of the magnetic tape 10 are measured by utilizing the magnetic Kerr effect. The magnetic Kerr effect is a phenomenon in which, in a case where a magnetized surface is irradiated with polarized light, the light polarization state (the angle of the polarization axis or the ellipticity) of reflected light changes in accordance with the magnetization state of the reflection surface. An external magnetic field is applied to a measurement sample, and the light polarization state based on the magnetic Kerr effect is measured while the strength of the external magnetic field is continuously changed; thereby, data equivalent to magnetic hysteresis are obtained, and magnetic characteristics such as coercive force or magnetization can be measured in a substitutive manner without breaking the measurement sample.

In order to enable (B) measuring magnetic characteristics in a state where the magnetic tape 10 continuously moves, a configuration is employed that includes three measurement units, in each of which an electromagnet that applies an external magnetic field is combined with a detection section that utilizes the magnetic Kerr effect, that is, uses the magnetic Kerr effect to measure the magnetization state of the magnetic tape 10.

[Apparatus for Measuring Magnetic Characteristics]

FIG. 3 shows a configuration of the apparatus for measuring magnetic characteristics 30 according to the first embodiment. The apparatus for measuring magnetic characteristics 30 includes a plurality of guide rolls 31, a negative-side saturation magnetization measurement section 32, a positive-side saturation magnetization measurement section 33, a magnetization measurement section 34, and a personal computer (hereinafter referred to as a "PC") 35. The negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the magnetization measurement section 34 are arranged in this order from the upstream side toward the downstream side on the conveyance path of the moving magnetic tape 10.

In the present specification, out of the thickness directions of the magnetic tape 10, the direction from the magnetic surface 10S1 toward the back surface 10S2 is referred to as a first perpendicular direction 10D1, and the opposite direction is referred to as a second perpendicular direction 10D2. Further, a state where an external magnetic field is applied to the magnetic tape 10 in the first perpendicular direction 10D1 and magnetic saturation is produced is referred to as a "negative-side magnetic saturation state", and the magnetization at this time is referred to as "negative-side saturation magnetization".
On the other hand, a state where an external magnetic field is applied to the magnetic tape 10 in the second perpendicular direction 10D2 and magnetic saturation is produced is referred to as a "positive-side magnetic saturation state", and the magnetization at this time is referred to as "positive-side saturation magnetization".

Each of the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the magnetization measurement section 34 has a configuration that can obtain a hysteresis loop like that shown in FIG. 4 (a relationship of the voltage value equivalent to the magnetization state of the magnetic tape 10 to the strength of the external magnetic field). However, in the first embodiment, as described below, a control to obtain such a hysteresis loop is not performed in the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, or the magnetization measurement section 34. Note that FIG. 4 is an example of measurement in which the external magnetic field is changed while the magnetic tape 10 is kept at a standstill.

(Guide Rolls)

The plurality of guide rolls 31 is an example of a conveyance section, and is provided on the conveyance path of the magnetic tape 10. The plurality of guide rolls 31 continuously moves (continuously runs) the magnetic tape 10 in a direction (for example, the horizontal direction) orthogonal to the direction of the external magnetic field applied by each of the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the magnetization measurement section 34. One of the plurality of guide rolls 31 is connected to an encoder 31a, and the encoder 31a supplies a pulse signal to the PC 35 in accordance with the rotation of the guide roll 31.

(Negative-Side Saturation Magnetization Measurement Section)

The negative-side saturation magnetization measurement section 32 applies an external magnetic field to the continuously moving magnetic tape 10 in the first perpendicular direction 10D1 to magnetically saturate the magnetic tape 10 on the negative side, applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 to which the external magnetic field is being applied, and measures the polarization axis angle θ1 of reflected light affected by the negative-side magnetic saturation state (hereinafter referred to as "the polarization axis angle θ1 of the negative-side magnetic saturation state"). Note that the polarization axis angle θ1 of the negative-side magnetic saturation state is an example of a measurement value of the light polarization state of reflected light affected by the negative-side magnetic saturation state.

The negative-side saturation magnetization measurement section 32 includes an electromagnet 32a, a power source 32b, and a light polarization detection section 32c. The electromagnet 32a is an example of a magnetic field generation section, and applies an external magnetic field to the magnetic tape 10 in the first perpendicular direction 10D1. Specifically, the electromagnet 32a is capable of applying, in the first perpendicular direction 10D1, an external magnetic field strong enough to magnetically saturate the magnetic tape 10 on the negative polarity side. The power source 32b is a power source for driving the electromagnet 32a.
The light polarization detection section 32c includes an irradiation section 32c1, a light receiving section 32c2, and a polarization axis angle detection circuit 32c3. The irradiation section 32c1 applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 located in the external magnetic field applied by the electromagnet 32a. The light receiving section 32c2 converts reflected light reflected at the magnetic surface 10S1 to an electrical signal by using a polarizing beam splitter, a photodetector, etc., and supplies the signal to the polarization axis angle detection circuit 32c3. The polarization axis angle detection circuit 32c3 detects the polarization axis angle θ1 of the reflected light on the basis of the signal supplied from the light receiving section 32c2, and supplies the polarization axis angle θ1 to the PC 35. The polarization axis angle detection circuit 32c3 is an example of a light polarization state detection circuit that detects the light polarization state of reflected light on the basis of a signal supplied from the light receiving section 32c2.

(Positive-Side Saturation Magnetization Measurement Section)

The positive-side saturation magnetization measurement section 33 applies an external magnetic field to the continuously moving magnetic tape 10 in the second perpendicular direction 10D2 to magnetically saturate the magnetic tape 10 on the positive side, applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 to which the external magnetic field is being applied, and measures the polarization axis angle θ2 of reflected light affected by the positive-side magnetic saturation state (hereinafter referred to as "the polarization axis angle θ2 of the positive-side magnetic saturation state"). Note that the polarization axis angle θ2 of the positive-side magnetic saturation state is an example of a measurement value of the light polarization state of reflected light affected by the positive-side magnetic saturation state.

The positive-side saturation magnetization measurement section 33 includes an electromagnet 33a, a power source 33b, and a light polarization detection section 33c. The electromagnet 33a is an example of a magnetic field generation section, and applies an external magnetic field to the magnetic tape 10 in the second perpendicular direction 10D2. Specifically, the electromagnet 33a is capable of applying, in the second perpendicular direction 10D2, an external magnetic field strong enough to magnetically saturate the magnetic tape 10 on the positive polarity side. The power source 33b is a power source for driving the electromagnet 33a.

The light polarization detection section 33c includes an irradiation section 33c1, a light receiving section 33c2, and a polarization axis angle detection circuit 33c3. The irradiation section 33c1 applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 located in the external magnetic field applied by the electromagnet 33a. The light receiving section 33c2 converts reflected light reflected at the magnetic surface 10S1 to an electrical signal by using a polarizing beam splitter, a photodetector, etc., and supplies the signal to the polarization axis angle detection circuit 33c3. The polarization axis angle detection circuit 33c3 detects the polarization axis angle θ2 of the reflected light on the basis of the signal supplied from the light receiving section 33c2, and supplies the polarization axis angle θ2 to the PC 35.
The polarization axis angle detection circuit 33c3 is an example of a light polarization state detection circuit that detects the light polarization state of reflected light on the basis of a signal supplied from the light receiving section 33c2.

(Magnetization Measurement Section)

The magnetization measurement section 34 applies an external magnetic field to the continuously moving magnetic tape 10 in the first perpendicular direction 10D1 to demagnetize the magnetic tape 10, applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 to which the external magnetic field is being applied, and measures the polarization axis angle θ3 of reflected light (hereinafter referred to as "the polarization axis angle θ3 of a demagnetization state"). Note that the polarization axis angle θ3 of the demagnetization state is an example of a measurement value of the light polarization state of reflected light reflected at the magnetic surface 10S1 in the demagnetization state.

The magnetization measurement section 34 includes an electromagnet 34a, a power source 34b, and a light polarization detection section 34c. The electromagnet 34a is an example of a magnetic field generation section, and applies an external magnetic field to the magnetic tape 10 in the first perpendicular direction 10D1 to demagnetize the magnetic tape 10 magnetized by the positive-side saturation magnetization measurement section 33. The power source 34b is a power source for driving the electromagnet 34a.

The light polarization detection section 34c includes an irradiation section 34c1, a light receiving section 34c2, and a polarization axis angle detection circuit 34c3. The irradiation section 34c1 applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 located in the external magnetic field applied by the electromagnet 34a. The light receiving section 34c2 converts reflected light reflected at the magnetic surface 10S1 to an electrical signal by using a polarizing beam splitter, a photodetector, etc., and supplies the signal to the polarization axis angle detection circuit 34c3. The polarization axis angle detection circuit 34c3 detects the polarization axis angle θ3 of the reflected light on the basis of the signal supplied from the light receiving section 34c2, and supplies the polarization axis angle θ3 to the PC 35. The polarization axis angle detection circuit 34c3 is an example of a light polarization state detection circuit that detects the light polarization state of reflected light on the basis of a signal supplied from the light receiving section 34c2.

(PC)

The PC 35 is an example of a control section, and controls the whole of the apparatus for measuring magnetic characteristics 30. Specifically, the PC 35 controls the plurality of guide rolls 31, the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the magnetization measurement section 34. Further, the PC 35 controls, in addition to the apparatus for measuring magnetic characteristics 30, the whole of the film formation apparatus 20 for the magnetic tape 10. Although a case where the PC 35 controls both the apparatus for measuring magnetic characteristics 30 and the film formation apparatus 20 for the magnetic tape 10 is described herein, the apparatus for measuring magnetic characteristics 30 and the film formation apparatus 20 for the magnetic tape 10 may be controlled by different control apparatuses.
The PC 35 includes a pulse counter board 35a; the pulse counter board 35a counts pulse signals supplied from the encoder 31a, and calculates the movement distance of the continuously moving magnetic tape 10. The PC 35 includes a D/A board 35b, and controls the power sources 32b, 33b, and 34b via the D/A board 35b to adjust the output magnetic field strengths of the electromagnets 32a, 33a, and 34a. The PC 35 includes an A/D board 35c, and takes in, via the A/D board 35c, the polarization axis angle θ1 of the negative-side magnetic saturation state, the polarization axis angle θ2 of the positive-side magnetic saturation state, and the polarization axis angle θ3 of the demagnetization state that are supplied from the polarization axis angle detection circuits 32c3, 33c3, and 34c3, respectively. At the time of the data taking-in, the PC 35 associates the data with the measurement position on the magnetic tape 10 on the basis of the count value of the pulse counter board 35a.

The PC 35 calculates the mean value θ0 (=(θ1+θ2)/2) of the polarization axis angle θ1 of the negative-side magnetic saturation state supplied from the negative-side saturation magnetization measurement section 32 and the polarization axis angle θ2 of the positive-side magnetic saturation state supplied from the positive-side saturation magnetization measurement section 33. Then, the output magnetic field strength of the electromagnet 34a is adjusted via the D/A board 35b and the power source 34b so that the polarization axis angle θ3 of the demagnetization state supplied from the magnetization measurement section 34 is equal to the mean value θ0. In the following, this control is referred to as "demagnetization control", as appropriate. Specifically, the value of current to be supplied to the electromagnet 34a is controlled via the D/A board 35b and the power source 34b, and thereby the output magnetic field strength of the electromagnet 34a is adjusted. Since the magnetic tape 10 continuously moves without stopping, the PC 35 continues the demagnetization control mentioned above while constantly managing the position of the magnetic tape 10.

The PC 35 adjusts the strength of the magnetic field in the magnetization measurement section 34 by using data acquired at the same position of the continuously moving magnetic tape 10 (that is, the polarization axis angle θ1 of the negative-side magnetic saturation state, the polarization axis angle θ2 of the positive-side magnetic saturation state, and the polarization axis angle θ3 of the demagnetization state). That is, the PC 35 performs demagnetization control in the magnetization measurement section 34 on the basis of the data that was acquired when the portion of the magnetic tape 10 now in the magnetization measurement section 34 was located in the negative-side saturation magnetization measurement section 32 and the positive-side saturation magnetization measurement section 33 (that is, the polarization axis angle θ1 of the negative-side magnetic saturation state and the polarization axis angle θ2 of the positive-side magnetic saturation state). By performing the arithmetic operation using data of the same position on the magnetic tape 10, a light polarization variation resulting from a difference in position on the magnetic tape 10 (including, in a case where film thickness or film quality varies, this variation as well) can be canceled. Thereby, a result that can substitute for part of conventional measurement in a standstill state is obtained even for a continuously moving magnetic tape 10.
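A minimal sketch of this demagnetization control, with hypothetical names and gains, may clarify the loop. It nudges the current of the electromagnet 34a until the measured θ3 matches θ0 = (θ1 + θ2)/2 for the same tape position; the toy tape response and the sign of the correction are illustrative assumptions.

    def demagnetization_control_step(theta1, theta2, theta3, current, gain=0.01, tol=1e-3):
        # One feedback update: returns (new current, demagnetized?).
        theta0 = (theta1 + theta2) / 2.0      # mean of the two saturation angles, same position
        error = theta3 - theta0
        if abs(error) <= tol:
            return current, True              # hold: convert this current to field strength
        return current + gain * error, False  # correction sign assumes theta3 falls as current rises

    # Toy plant: theta3 responds linearly to the applied current (illustrative only).
    theta1, theta2 = -0.30, 0.50              # example saturation angles (degrees)
    current, done = 0.0, False
    for _ in range(1000):
        theta3 = 0.50 - 40.0 * current        # hypothetical tape response
        current, done = demagnetization_control_step(theta1, theta2, theta3, current)
        if done:
            break
    print(done, round(current, 4))            # current at which theta3 equals (theta1 + theta2)/2

In the real apparatus, the converged current is converted to a magnetic field strength, which is read out as the coercive force, as described next.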
There is a case where, immediately after demagnetization control is started, agreement with the mean value θ0 mentioned above is not achieved fully (that is, demagnetization is not achieved fully); however, feedback whereby the demagnetization state can be maintained is achieved by continuously performing demagnetization control. The PC 35 acquires, as the coercive force, the strength of the magnetic field applied by the electromagnet 34a in a state where the demagnetization state is maintained successfully. Specifically, the PC 35 assesses whether the polarization axis angle θ3 of the demagnetization state is equal to the mean value θ0 or not; the PC 35 converts the value of current that is supplied to the electromagnet 34a when the polarization axis angle θ3 of the demagnetization state becomes the mean value θ0 to magnetic field strength, and acquires the coercive force. The PC 35 may display the measurement result of coercive force on a display apparatus of the PC 35, or may output the measurement result to an external device, as necessary.

[Operation of Apparatus for Measuring Magnetic Characteristics]

Hereinbelow, an operation of the apparatus for measuring magnetic characteristics 30 having the configuration described above is described with reference to FIG. 5. First, a worker uses the PC 35 to execute a manipulation of the start of film formation of the magnetic tape 10; then, in step S1, the PC 35 drives the plurality of guide rolls 31 to continuously move the magnetic tape 10 in a direction (for example, the horizontal direction) orthogonal to the direction of the external magnetic field applied by each of the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the magnetization measurement section 34.

Next, in step S2, the PC 35 controls the negative-side saturation magnetization measurement section 32 to apply a sufficiently large external magnetic field (for example, −15 kOe or the like) to the continuously moving magnetic tape 10 in the first perpendicular direction 10D1, and magnetically saturates the magnetic tape 10 on the negative side. Further, polarized light is applied to the magnetic surface 10S1 of the magnetic tape 10 existing in the external magnetic field, and the polarization axis angle θ1 of reflected light of the polarized light (that is, the polarization axis angle θ1 of the negative-side magnetic saturation state) is measured and supplied to the PC 35.

Next, in step S3, the PC 35 controls the positive-side saturation magnetization measurement section 33 to apply a sufficiently large external magnetic field (for example, +15 kOe or the like) to the continuously moving magnetic tape 10 in the second perpendicular direction 10D2, and magnetically saturates the magnetic tape 10 on the positive side. Further, polarized light is applied to the magnetic surface 10S1 of the magnetic tape 10 existing in the external magnetic field, and the polarization axis angle θ2 of reflected light of the polarized light (that is, the polarization axis angle θ2 of the positive-side magnetic saturation state) is measured and supplied to the PC 35.

Next, in step S4, the PC 35 controls the magnetization measurement section 34 to apply an external magnetic field to the continuously moving magnetic tape 10 in the first perpendicular direction 10D1, and demagnetizes the magnetic tape 10. Specifically, the value of current to be supplied to the electromagnet 34a is controlled via the D/A board 35b and the power source 34b, and thereby the magnetic tape 10 is demagnetized.
Further, polarized light is applied to the magnetic surface 10S1 of the magnetic tape 10 existing in the external magnetic field applied by the magnetization measurement section 34, and the polarization axis angle θ3 of reflected light of the polarized light (that is, the polarization axis angle θ3 of the demagnetization state) is measured and supplied to the PC 35.

Next, in step S5, the PC 35 calculates the mean value θ0 (=(θ1+θ2)/2) of the polarization axis angle θ1 measured in step S2 and the polarization axis angle θ2 measured in step S3. Then, it is assessed whether the polarization axis angle θ3 measured in step S4 is equal to the calculated mean value θ0 or not. Whether the magnetization of the magnetic tape 10 is zero or not can be assessed by thus assessing whether the polarization axis angle θ3 is equal to the mean value θ0 or not.

In a case where it is assessed that the polarization axis angle θ3 measured in step S4 is equal to the mean value θ0, in step S6, the PC 35 converts the value of current that is supplied to the electromagnet 34a when the values are assessed to be equal as mentioned above to magnetic field strength, and acquires the coercive force. On the other hand, in a case where it is assessed in step S5 that the polarization axis angle θ3 measured in step S4 is not equal to the mean value θ0, in step S7, the PC 35 adjusts the magnetic field strength of the magnetization measurement section 34 so that the polarization axis angle θ3 measured in step S4 becomes equal to the mean value θ0. Specifically, the value of current to be supplied to the electromagnet 34a is controlled via the D/A board 35b and the power source 34b so that the polarization axis angle θ3 measured in step S4 becomes equal to the mean value θ0, and thereby the output magnetic field strength of the electromagnet 34a is adjusted.

Note that, since the magnetic tape 10 continuously moves, the processing of steps S1 to S7 is set to be performed continuously without a break. Further, the encoder 31a, etc. are used to manage the position of the magnetic tape 10 so that the movement distance of the magnetic tape 10 can be grasped; thus, the measurement, comparison, adjustment, etc. of steps S2 to S7 are allowed to be performed for the same position on the magnetic tape 10.

Table 1 shows a result of measurement in which the coercive force of a magnetic tape 10 with the coercive force set to 2.5 kOe was measured repeatedly 10 times by using the apparatus for measuring magnetic characteristics 30 according to the first embodiment. From Table 1, it can be seen that, with the apparatus for measuring magnetic characteristics 30 according to the first embodiment, the coercive force of the continuously moving magnetic tape 10 can be measured with good precision in a non-destructive, non-contact manner.

TABLE 1
Number of times of    Measurement value of
measurement           coercive force (kOe)
 1                    2.57
 2                    2.62
 3                    2.67
 4                    2.57
 5                    2.41
 6                    2.41
 7                    2.41
 8                    2.36
 9                    2.41
10                    2.36
Average               2.48
3 sigma               0.35

[Effects]

In the apparatus for measuring magnetic characteristics 30 according to the first embodiment, the coercive force (magnetic characteristics) of the magnetic tape 10 can be measured without bringing the continuously moving magnetic tape 10 to a standstill or breaking the magnetic tape 10. In a roll-to-roll film formation process, magnetic characteristics of the magnetic tape 10 can be measured quickly without interrupting running-based film formation or breaking the magnetic tape 10; therefore, a measurement result of magnetic characteristics can be fed back to the film formation process rapidly and appropriately.
Thus, the yield can be improved. Further, the occurrence of defects can be further suppressed by controlling film formation conditions, on the basis of feedback of a measurement result of magnetic characteristics, so that the characteristics fall within a range narrower than the standard value range of the process.

2 Second Embodiment

[Apparatus for Measuring Magnetic Characteristics]

FIG. 6 shows a configuration of an apparatus for measuring magnetic characteristics 30A according to a second embodiment. The apparatus for measuring magnetic characteristics 30A differs from the apparatus for measuring magnetic characteristics 30 according to the first embodiment in that it includes a residual magnetization measurement section 36 in place of the magnetization measurement section 34.

(Residual Magnetization Measurement Section)

The residual magnetization measurement section 36 applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 to which an external magnetic field is not applied, and measures the polarization axis angle θ4 of reflected light (hereinafter referred to as "the polarization axis angle θ4 of a residual magnetization state"). Note that the polarization axis angle θ4 of the residual magnetization state is an example of a measurement value of the light polarization state of reflected light affected by residual magnetization.

The residual magnetization measurement section 36 includes a light polarization detection section 36c. The light polarization detection section 36c includes an irradiation section 36c1, a light receiving section 36c2, and a polarization axis angle detection circuit 36c3. The irradiation section 36c1 applies polarized light to the magnetic surface 10S1 of the magnetic tape 10 to which an external magnetic field is not applied by an electromagnet. The light receiving section 36c2 converts reflected light reflected at the magnetic surface 10S1 to an electrical signal by using a polarizing beam splitter, a photodetector, etc., and supplies the signal to the polarization axis angle detection circuit 36c3. The polarization axis angle detection circuit 36c3 detects the polarization axis angle θ4 of the reflected light on the basis of the signal supplied from the light receiving section 36c2, and supplies the polarization axis angle θ4 to the PC 35. The polarization axis angle detection circuit 36c3 is an example of a light polarization state detection circuit that detects the light polarization state of reflected light on the basis of a signal supplied from the light receiving section 36c2.

(PC)

The PC 35 is an example of an arithmetic section, and controls the whole of the apparatus for measuring magnetic characteristics 30A. Specifically, the PC 35 controls the plurality of guide rolls 31, the negative-side saturation magnetization measurement section 32, the positive-side saturation magnetization measurement section 33, and the residual magnetization measurement section 36. The PC 35 takes in, via the A/D board 35c, the polarization axis angle θ1 of the negative-side magnetic saturation state, the polarization axis angle θ2 of the positive-side magnetic saturation state, and the polarization axis angle θ4 of the residual magnetization state that are supplied from the polarization axis angle detection circuits 32c3, 33c3, and 36c3, respectively. At the time of the data taking-in, the PC 35 associates the data with the measurement position on the magnetic tape 10 on the basis of the count value of the pulse counter board 35a.
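How this position bookkeeping might look in practice can be sketched as follows. The idea, per the description above, is that the θ1, θ2, and θ4 values combined in one computation must all come from the same spot on the tape; the inter-section spacings, expressed here in encoder pulses, are hypothetical, as is the sampling of one reading per pulse.

    from collections import deque

    PULSES_32_TO_33 = 120   # assumed spacing from section 32 to section 33
    PULSES_33_TO_36 = 150   # assumed spacing from section 33 to section 36

    buf1 = deque()  # (pulse count, theta1) history from the negative-side section 32
    buf2 = deque()  # (pulse count, theta2) history from the positive-side section 33

    def _lookup(buf, target):
        while buf and buf[0][0] < target:
            buf.popleft()                    # discard positions already consumed
        return buf[0][1] if buf and buf[0][0] == target else None

    def on_sample(pulse_count, theta1, theta2, theta4):
        # Called once per encoder pulse; returns (theta1, theta2, theta4) for one
        # tape position once all three sections have seen that position.
        buf1.append((pulse_count, theta1))
        buf2.append((pulse_count, theta2))
        t1 = _lookup(buf1, pulse_count - PULSES_32_TO_33 - PULSES_33_TO_36)
        t2 = _lookup(buf2, pulse_count - PULSES_33_TO_36)
        return (t1, t2, theta4) if t1 is not None and t2 is not None else None

The spot currently under the residual magnetization measurement section 36 passed section 33 some pulses earlier and section 32 earlier still, so the buffered readings are looked up with the corresponding offsets.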
The PC 35 calculates the mean value θ0 (=(θ1+θ2)/2) of the polarization axis angle θ1 of the negative-side magnetic saturation state supplied from the negative-side saturation magnetization measurement section 32 and the polarization axis angle θ2 of the positive-side magnetic saturation state supplied from the positive-side saturation magnetization measurement section 33. Then, the difference Δθ10 (=θ1−θ0) between the polarization axis angle θ1 of the negative-side magnetic saturation state and the mean value θ0 and the difference Δθ40 (=θ4−θ0) between the polarization axis angle θ4 of the residual magnetization state and the mean value θ0 are calculated. When performing the calculation, since the data and the measurement position on the magnetic tape 10 are associated together beforehand, it is preferable that the arithmetic operation be performed using data of the same position on the magnetic tape 10, so that a light polarization variation resulting from a difference in position on the magnetic tape 10 (including, in a case where film thickness or film quality varies, this variation as well) is canceled.

Difference Δθ10 means the polarization axis angle θ1′ (=θ1−θ0) of the negative-side magnetic saturation state with the mean value θ0 as a reference. Difference Δθ40 means the polarization axis angle θ4′ (=θ4−θ0) of the residual magnetization state with the mean value θ0 as a reference. Difference Δθ10 is an example of a measurement value of the light polarization state of the negative-side magnetic saturation state using, as a reference, the mean value of the measurement value of the light polarization state of the negative-side magnetic saturation state and the measurement value of the light polarization state of the positive-side magnetic saturation state. Difference Δθ40 is an example of a measurement value of the light polarization state of the residual magnetization state using, as a reference, the mean value of the measurement value of the light polarization state of the negative-side magnetic saturation state and the measurement value of the light polarization state of the positive-side magnetic saturation state.

The PC 35 uses the calculated differences Δθ10 and Δθ40 to calculate the ratio (Δθ40/Δθ10) of difference Δθ40 to difference Δθ10, and obtains the squareness ratio. Note that the PC 35 calculates the squareness ratio described above by using data (that is, the polarization axis angle θ1 of the negative-side magnetic saturation state, the polarization axis angle θ2 of the positive-side magnetic saturation state, and the polarization axis angle θ4 of the residual magnetization state) acquired at the same position of the continuously moving magnetic tape 10. As described above, the PC 35 takes in and stores measurement data and the measurement position on the magnetic tape 10 while associating them together, and can therefore calculate the squareness ratio by using measurement data of the same position on the magnetic tape 10.

[Method for Measuring Magnetic Characteristics]

Hereinbelow, an operation of the apparatus for measuring magnetic characteristics 30A having the configuration described above is described with reference to FIG. 7. First, a worker uses the PC 35 to execute a manipulation of the start of film formation of the magnetic tape 10; then, in step S11, the PC 35 continuously moves the magnetic tape 10 as in step S1 in the first embodiment.
Next, in step S12, the PC35measures the polarization axis angle θ1(that is, the polarization axis angle θ1of the negative-side magnetic saturation state) in a similar manner to step S2in the first embodiment, and supplies the polarization axis angle θ1to the PC35itself. Next, in step S13, the PC35measures the polarization axis angle θ2(that is, the polarization axis angle θ2of the positive-side magnetic saturation state) in a similar manner to step S3in the first embodiment, and supplies the polarization axis angle θ2to the PC35itself. Next, in step S14, the PC35applies polarized light to the magnetic surface10S1of the continuously running magnetic tape10to which an external magnetic field is not applied by an electromagnet, measures the polarization axis angle θ4of reflected light of the polarized light (that is, the polarization axis angle θ4of the residual magnetization state), and supplies the polarization axis angle θ4to the PC35itself. Next, in step S15, the PC35calculates the mean value θ0(=(θ1+θ2)/2) of the polarization axis angle θ1measured in step S12and the polarization axis angle θ2measured in step S13. Next, in step S16, the PC35calculates the difference Δθ10=θ1−θ0between the polarization axis angle θ1measured in step S12and the mean value θ0calculated in step S15. Further, the PC35calculates the difference Δθ40=θ4−θ0between the polarization axis angle θ4measured in step S14and the mean value θ0calculated in step S15. Next, in step S17, the PC35uses the differences Δθ10and Δθ40calculated in step S16to calculate the ratio (Δθ40/Δθ10) of difference Δθ40to difference Δθ10, and obtains the squareness ratio. Note that, since the magnetic tape10continuously moves, the processing of steps S11to S17is set to be continuously performed without a break. Further, the encoder31a, etc. are used to manage the position of the magnetic tape10so that the movement distance of the magnetic tape10can be grasped; thus, the measurements of steps S12to S14are allowed to be performed in the same position on the magnetic tape10. [Effects] In the apparatus for measuring magnetic characteristics30A according to the second embodiment, the squareness ratio (magnetic characteristics) of the magnetic tape10can be measured without bringing the continuously moving magnetic tape10to a standstill or breaking the magnetic tape10. Modification Examples Hereinabove, the first and second embodiments of the present disclosure are specifically described; however, the present disclosure is not limited to the first or second embodiment described above, but various modifications may be made based on the technical idea of the present disclosure. For example, the configurations, methods, processes, shapes, materials, numerical values, etc. shown in the first and second embodiments described above are only examples, and configurations, methods, processes, shapes, materials, numerical values, etc. different from them may be used as necessary. Further, configurations, methods, processes, shapes, materials, numerical values, etc. of the first and second embodiments described above may be combined with each other without departing from the spirit of the present disclosure.
Although the first and second embodiments described above have described a case where magnetic characteristics of a continuously running (continuously moving) magnetic tape10are measured, the present disclosure is not limited to the measurement of magnetic characteristics of the magnetic tape10, but can be applied also to the measurement of magnetic characteristics of a magnetic recording medium other than the magnetic tape10. For example, the present disclosure can be applied also to the measurement of magnetic characteristics of a continuously rotating magnetic disk (for example, a hard disk). In this case, magnetic characteristics (for example, the coercive force, the squareness ratio, or the like) of the continuously rotating magnetic disk can be measured in a non-destructive, non-contact manner. Although the first and second embodiments described above have described apparatuses for measuring magnetic characteristics30and30A in which a magnetic tape10is caused to pass through gap portions of electromagnets32a,33a, and34aand the light polarization state of the magnetic surface10S1of the magnetic tape10passing through the gap portions is measured, the configuration of the apparatus for measuring magnetic characteristics is not limited to this. For example, as shown inFIG.8, the electromagnets32a,33a, and34amay be provided only on one surface side of the magnetic tape10(for example, the magnetic surface10S1side), and the light polarization state of the magnetic surface10S1of the magnetic tape10to which an external magnetic field is being applied by the electromagnet32a,33a, or34amay be measured. Although the first and second embodiments described above have described a case where the apparatuses for measuring magnetic characteristics30and30A are controlled by a PC35, the apparatuses for measuring magnetic characteristics30and30A may be controlled by a dedicated control apparatus or the like in place of the PC35. Although the first embodiment described above has described a method in which the coercive force of the magnetic tape10is measured by the apparatus for measuring magnetic characteristics30, it is also possible to employ a method in which the saturation magnetization of the magnetic tape10is measured by the apparatus for measuring magnetic characteristics30in the following manner. That is, a relationship between the amount of magnetization of the magnetic tape10and the amount of change of the polarization axis angle based on magnetization is found in advance, and a conversion factor is prepared and is stored in the storage section of the PC35in advance. The PC35compares the two polarization axis angles of the negative-side saturation magnetization measurement section32and the positive-side saturation magnetization measurement section33by using measurement values of the same position on the magnetic tape10, and uses the conversion factor mentioned above to convert the difference between the two values to the amount of magnetization, thereby obtaining the amount of saturation magnetization. Note that it is also possible to calculate the difference between the measurement value of the negative-side or positive-side saturation magnetization measurement section and, as a reference, the mean value of the measurement values of the negative-side and positive-side saturation magnetizations, and to use this difference for the conversion.
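As a rough illustration of this conversion-factor method, the Python sketch below assumes a pre-calibrated constant K_CONV (a hypothetical name and value); whether the negative-to-positive swing corresponds to once or twice the saturation magnetization is assumed to be absorbed into the calibration:

```python
# Hedged sketch of the conversion-factor method described above.
# K_CONV is an assumed calibration constant relating a change in
# polarization axis angle (degrees) to an amount of magnetization;
# it is not a value from the embodiment.

K_CONV = 12.5  # hypothetical: magnetization units per degree

def saturation_magnetization(theta1_deg: float, theta2_deg: float) -> float:
    """Convert the swing between the negative-side and positive-side
    saturation angles (measured at the same tape position) to an
    amount of magnetization."""
    return K_CONV * abs(theta2_deg - theta1_deg)

def saturation_magnetization_from_mean(theta1_deg: float, theta2_deg: float) -> float:
    """Variant using the mean value as a reference, as the text suggests."""
    theta0 = (theta1_deg + theta2_deg) / 2.0
    return K_CONV * abs(theta2_deg - theta0)
```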
Although the first embodiment described above has described a configuration of an apparatus for measuring magnetic characteristics30that measures the coercive force of the magnetic tape10, residual magnetization can also be measured by employing the following configuration. That is, as shown inFIG.9, the irradiation section32c1and the light receiving section32c2are provided in positions more on the downstream side of the conveyance path of the magnetic tape10than the electromagnet32aand more on the upstream side of the conveyance path of the magnetic tape10than the electromagnet33a. The irradiation section33c1and the light receiving section33c2are provided in positions more on the downstream side of the conveyance path of the magnetic tape10than the electromagnet33a. A relationship between the amount of magnetization of the magnetic tape10and the amount of change of the polarization axis angle based on magnetization is found in advance, and a conversion factor is prepared and is stored in the storage section of the PC35in advance. The apparatus for measuring magnetic characteristics30having the configuration described above operates in the following manner. First, the PC35uses the electromagnet32ato magnetically saturate the magnetic tape10on the negative side, and then uses the light polarization detection section32cto measure the polarization axis angle in a position outside the magnetic field area of the electromagnet32a. Subsequently, further on the downstream side, the magnetic tape10is magnetically saturated on the positive side by the electromagnet33a, and then the polarization axis angle is measured by the light polarization detection section33cin a position outside the magnetic field area of the electromagnet33a. After that, the PC35compares the two polarization axis angles obtained by the light polarization detection section32cand the light polarization detection section33cby using measurement values of the same position on the magnetic tape10, and uses the conversion factor mentioned above to convert the difference between the two values to the amount of magnetization, thereby obtaining the amount of residual magnetization. Note that it is also possible to calculate the difference between the measurement value in the light polarization detection section32cor the light polarization detection section33cand, as a reference, the mean value of the measurement values in the light polarization detection section32cand the light polarization detection section33c, and to use this difference for the conversion. It is also possible to measure the squareness ratio with the apparatus for measuring magnetic characteristics30according to the first embodiment. That is, it is also possible to measure the squareness ratio by a method in which the electromagnet34ais not electrically energized but set in a nonuse state and an operation similar to the operation of the apparatus for measuring magnetic characteristics30A according to the second embodiment is performed. Although the first embodiment described above has described an example in which the value of current that is supplied to the electromagnet34awhen the polarization axis angle θ3of the demagnetization state becomes equal to the mean value θ0is converted to magnetic field strength to obtain the coercive force, the method for measuring the coercive force is not limited to this.
For example, it is also possible to employ a method in which the magnetization measurement section34includes a magnetic field measurement section (for example, a Hall element or the like) for measuring the magnetic field strength of the electromagnet34aand the magnetic field strength of the electromagnet34awhen the polarization axis angle θ3of the demagnetization state becomes equal to the mean value θ0is measured by the magnetic field measurement section to obtain the coercive force. Although the first and second embodiments described above have described an example in which a continuously moving magnetic tape10is brought first into the negative-side magnetic saturation state and then into the positive-side magnetic saturation state, the magnetic tape10may be brought first into the positive-side magnetic saturation state and then into the negative-side magnetic saturation state. That is, the placement positions of the negative-side saturation magnetization measurement section32and the positive-side saturation magnetization measurement section33may be reversed. Although the first and second embodiments described above have described a case where magnetic characteristics of a magnetic tape10of a perpendicular magnetic recording system are measured, it is also possible to measure magnetic characteristics of a magnetic tape10of a horizontal magnetic recording system (an in-plane magnetic recording system). In this case, each of the electromagnets32a,33a, and34ais capable of applying an external magnetic field in the longitudinal direction of the magnetic tape10. Note that the directions of application of the magnetic fields of the electromagnets32aand34aand the electromagnet33aare opposite directions like in the first and second embodiments. In the first and second embodiments described above, the PC35may feed a measurement result of magnetic characteristics (the coercive force, the squareness ratio, or the like) back to the film formation process. That is, the PC35may adjust a film formation condition for the magnetic layer13, etc. on the basis of a measurement result of magnetic characteristics so that the magnetic characteristics of the magnetic tape10fall within a prescribed range. More specifically, the PC35compares, on a real time basis, a measurement result of magnetic characteristics measured by the apparatus for measuring magnetic characteristics30or30A with prescribed magnetic characteristics stored in the storage section of the PC35, and feeds the result back to the film formation process so that the magnetic characteristics fall within the range of the prescribed magnetic characteristics. That is, a film formation condition for the magnetic layer13, etc. is adjusted so that the magnetic characteristics fall within the range of the prescribed magnetic characteristics. As the film formation condition to be adjusted, for example, at least one of the amount of the coating material13adischarged (that is, the thickness of the magnetic layer13), a drying condition for the coating material13a, the magnetic field strength at the time of magnetic field orientation, or the like is given. Specifically, for example, the squareness ratio can be changed by adjusting the magnetic field strength and adjusting the orientation state of the magnetic powder immediately after the application of the coating material13a. Further, the amount of saturation magnetization can be adjusted by adjusting the thickness of the magnetic layer13.
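A minimal sketch of this kind of feedback loop is shown below in Python; the prescribed range, the step size, and the setter semantics are assumptions chosen for illustration, not the embodiment's control logic:

```python
# Hedged feedback sketch: nudge one film formation condition (here the
# orienting magnetic field strength) so that the measured squareness
# ratio moves back into a prescribed range. All values are assumed.

TARGET_SQUARENESS = (0.80, 0.90)   # assumed prescribed range
FIELD_STEP_OE = 10.0               # assumed adjustment step (Oe)

def adjust_orienting_field(measured_sq: float, field_oe: float) -> float:
    """Return an updated orienting-field strength. Per the text, a
    stronger field improves the orientation state of the magnetic
    powder and hence raises the squareness ratio."""
    lo, hi = TARGET_SQUARENESS
    if measured_sq < lo:
        field_oe += FIELD_STEP_OE
    elif measured_sq > hi:
        field_oe -= FIELD_STEP_OE
    return field_oe
```

The same skeleton applies to the other conditions named above (discharge amount, drying condition), with the sign of the correction depending on how each condition influences the measured characteristic.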
Although the first and second embodiments described above have described an example in which magnetic characteristics of a coating-type magnetic tape10in which a magnetic layer13, etc. are produced by a coating process (a wet process) are measured, it is also possible to measure magnetic characteristics of a vacuum thin film-type magnetic tape in which a magnetic layer, etc. are produced by a technology for producing a vacuum thin film (a dry process). As the method for producing a vacuum thin film, for example, the sputtering method, the vapor deposition method, or the like is used, but the method is not limited to these. FIG.10shows a configuration of a film formation apparatus40for magnetic tapes that uses the sputtering method to perform film formation to obtain a magnetic layer, etc. The film formation apparatus40for magnetic tapes is a continuous winding-type sputtering apparatus used for the film formation of a seed layer, a ground layer, and a magnetic layer (a recording layer), and includes a film formation chamber41, a drum42that is a metal can (a rotation body), cathodes43ato43c, a supply reel44, a winding reel45, a plurality of guide rolls47ato47cand48ato48c, and an apparatus for measuring magnetic characteristics49. The film formation apparatus40is, for example, a sputtering apparatus of a direct current (DC) magnetron sputtering system, but the sputtering system is not limited to this system. The apparatus for measuring magnetic characteristics49is the apparatus for measuring magnetic characteristics30according to the first embodiment or the apparatus for measuring magnetic characteristics30A according to the second embodiment. However, the PC35, which is a control section, is placed outside the film formation apparatus40for magnetic tapes, and also serves as a control section that controls the film formation apparatus40. Although herein an example in which the film formation apparatus40includes three cathodes43ato43cis described, the number of cathodes is not limited to this, but may be one, two, or four or more. Further, although herein an example in which a seed layer and a ground layer are formed as films as sputtered layers other than the magnetic layer is described, at least one kind of layer of a soft magnetic backing layer (a SUL layer), an intermediate layer, etc. may be formed as a film in place of the seed layer and the ground layer or in combination with the seed layer and the ground layer. The film formation chamber41is connected to a not-illustrated vacuum pump via an air outlet46, and the atmosphere in the film formation chamber41is set at a prescribed degree of vacuum by the vacuum pump. The drum42, the supply reel44, and the winding reel45, each of which has a rotatable configuration, are placed in the interior of the film formation chamber41. In the interior of the film formation chamber41, the plurality of guide rolls47ato47cfor guiding the conveyance of the substrate11between the supply reel44and the drum42is provided, and further the plurality of guide rolls48ato48cfor guiding the conveyance of the substrate11between the drum42and the winding reel45is provided. At the time of sputtering, the substrate11wound out from the supply reel44is wound around the winding reel45via the guide rolls47ato47c, the drum42, and the guide rolls48ato48c. The drum42has a circular columnar shape, and the long-length substrate11is conveyed along the circular columnar circumferential surface of the drum42.
A not-illustrated cooling mechanism is provided in the drum42, and performs cooling to, for example, approximately 20° C. at the time of sputtering. In the interior of the film formation chamber41, the plurality of cathodes43ato43cis arranged facing the circumferential surface of the drum42. A target is set in each of the cathodes43ato43c. Specifically, targets for forming a seed layer, a ground layer, and a magnetic layer as films are set in the cathodes43a,43b, and43c, respectively. A plurality of kinds of films, that is, a seed layer, a ground layer, and a magnetic layer are simultaneously formed as films by the cathodes43ato43c. The film formation apparatus40having the configuration described above can continuously form a seed layer, a ground layer, and a magnetic layer as films by a roll-to-roll method. The PC35compares, on a real time basis, a measurement result of magnetic characteristics measured by the apparatus for measuring magnetic characteristics49with prescribed magnetic characteristics (for example, the coercive force and the squareness ratio, or the like) stored in the storage section of the PC35, and feeds the result back to the film formation process so that the magnetic characteristics fall within the range of the prescribed magnetic characteristics. That is, a film formation condition for the magnetic layer13, etc. is adjusted so that the magnetic characteristics fall within the range of the prescribed magnetic characteristics. As the film formation condition to be adjusted, for example, at least one of the sputtering electric power, the film running speed, the amount of gas introduced, the kind of gas introduced, the degree of vacuum, or the like is given. Specifically, for example, the coercive force and the squareness ratio can be changed by adjusting the degassing state in the film formation chamber41. Note that, in a case where film formation is performed by a vapor deposition method, examples of the film formation condition to be adjusted include the strength of an electron beam, and the like. Products falling within the range of process standard values can be continuously produced by providing the apparatus for measuring magnetic characteristics49in the film formation apparatus40for magnetic tapes, setting the range of the relevant magnetic characteristics narrower than the standard value range of the process, and performing feedback control as described above. In the first embodiment described above, each of the negative-side saturation magnetization measurement section32, the positive-side saturation magnetization measurement section33, and the magnetization measurement section34may be provided facing the magnetic surface10S1of the magnetic tape10, and may further include a reflecting mirror (a reflection section) that reflects, toward the magnetic surface10S1, polarized light reflected at the magnetic surface10S1. In this case, polarized light incident on the magnetic surface10S1is repeatedly reflected between the magnetic surface10S1and the reflecting mirror, that is, is reflected multiple times at the magnetic surface10S1of the magnetic tape10, and is then received by the light receiving section32c2,33c2, or34c2. Thus, changes of the light polarization state based on the magnetic Kerr effect can be accumulated. Therefore, the measurement sensitivity of the light polarization state can be improved.
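The gain from multiple reflections can be illustrated with a toy calculation, assuming the Kerr rotation simply adds on each reflection while each reflection also attenuates the beam; both the per-bounce rotation and the reflectivity below are illustrative assumptions, not values from the embodiment:

```python
# Toy estimate of the multiple-reflection sensitivity gain described
# above. Assumes the Kerr rotation accumulates linearly per bounce and
# each bounce costs a fixed intensity reflectivity; both numbers are
# assumed for illustration.

THETA_K_DEG = 0.02   # assumed single-bounce Kerr rotation (degrees)
REFLECTIVITY = 0.6   # assumed per-bounce intensity reflectivity

def accumulated_rotation(n_bounces: int) -> tuple[float, float]:
    """Return (total rotation in degrees, remaining relative intensity)."""
    return n_bounces * THETA_K_DEG, REFLECTIVITY ** n_bounces

for n in (1, 3, 5):
    rot, inten = accumulated_rotation(n)
    print(f"{n} bounces: rotation {rot:.3f} deg, intensity {inten:.2f}")
```

Under these assumptions the rotation signal grows with the number of reflections while the received intensity shrinks, which is one plausible reason the number of reflections would be chosen as a trade-off rather than made arbitrarily large.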
Similarly, in the second embodiment described above, each of the negative-side saturation magnetization measurement section32, the positive-side saturation magnetization measurement section33, and the residual magnetization measurement section36may be provided facing the magnetic surface10S1of the magnetic tape10, and may further include a reflecting mirror (a reflection section) that reflects, toward the magnetic surface10S1, polarized light reflected at the magnetic surface10S1. Although the first and second embodiments described above have described a case where each of the negative-side saturation magnetization measurement section32, the positive-side saturation magnetization measurement section33, the magnetization measurement section34, and the residual magnetization measurement section36measures a polarization axis angle as the light polarization state of reflected light, it is also possible to measure the ellipticity, the intensity of reflection, etc. instead of the polarization axis angle. In this case, ellipticity measurement circuits, reflection intensity measurement circuits, etc. are used in place of the polarization axis angle detection circuits32c3,33c3,34c3, and36c3. In addition, the present disclosure may be configured by the following configuration. (1) A method for measuring magnetic characteristics, the method including: applying a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measuring a light polarization state of a first reflected light that is reflected; applying a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measuring a light polarization state of a second reflected light that is reflected; applying a third magnetic field having an opposite direction of the second magnetic field to the continuously moving magnetic recording medium, applying a third polarized light to the surface of the magnetic recording medium to which the third magnetic field is being applied, and measuring a light polarization state of a third reflected light that is reflected; and adjusting a strength of the third magnetic field so that a measurement value of the light polarization state of the third reflected light is a mean value of a measurement value of the light polarization state of the first reflected light and a measurement value of the light polarization state of the second reflected light, and obtaining the strength of the third magnetic field when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value. (2) The method for measuring magnetic characteristics according to (1), in which the magnetic recording medium is continuously moved in a direction going straight relative to a direction of each of the first magnetic field, the second magnetic field, and the third magnetic field. 
(3) The method for measuring magnetic characteristics according to (1) or (2), in which the measurement value of the light polarization state of the first reflected light, the measurement value of the light polarization state of the second reflected light, and the measurement value of the light polarization state of the third reflected light used to adjust the strength of the third magnetic field are acquired in the same position of the continuously moving magnetic recording medium. (4) The method for measuring magnetic characteristics according to any one of (1) to (3), in which each of the first polarized light, the second polarized light, and the third polarized light is reflected multiple times at the surface of the magnetic recording medium. (5) The method for measuring magnetic characteristics according to any one of (1) to (4), in which the light polarization state of the first reflected light, the light polarization state of the second reflected light, and the light polarization state of the third reflected light are a polarization axis angle of the first reflected light, a polarization axis angle of the second reflected light, and a polarization axis angle of the third reflected light, respectively. (6) An apparatus for measuring magnetic characteristics, the apparatus including: a first measurement section configured to apply a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, apply a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measure a light polarization state of a first reflected light that is reflected; a second measurement section configured to apply a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, apply a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measure a light polarization state of a second reflected light that is reflected; a third measurement section configured to apply a third magnetic field having an opposite direction of the second magnetic field to the continuously moving magnetic recording medium, apply a third polarized light to the surface of the magnetic recording medium to which the third magnetic field is being applied, and measure a light polarization state of a third reflected light that is reflected; and a control section configured to control the third measurement section to adjust a strength of the third magnetic field so that a measurement value of the light polarization state of the third reflected light is a mean value of a measurement value of the light polarization state of the first reflected light and a measurement value of the light polarization state of the second reflected light, and obtain the strength of the third magnetic field when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value. (7) The apparatus for measuring magnetic characteristics according to (6), the apparatus further including: a conveyance section configured to continuously move the magnetic recording medium in a direction going straight relative to a direction of each of the first magnetic field, the second magnetic field, and the third magnetic field. 
(8) The apparatus for measuring magnetic characteristics according to (6) or (7), in which the control section adjusts the strength of the third magnetic field by using the measurement value of the light polarization state of the first reflected light, the measurement value of the light polarization state of the second reflected light, and the measurement value of the light polarization state of the third reflected light that are acquired in the same position of the continuously moving magnetic recording medium. (9) The apparatus for measuring magnetic characteristics according to any one of (6) to (8), the apparatus further including: reflection sections provided facing the surface of the magnetic recording medium, in which the first polarized light, the second polarized light, and the third polarized light are repeatedly reflected between the surface of the magnetic recording medium and the reflection sections. (10) The apparatus for measuring magnetic characteristics according to any one of (6) to (9), in which the third measurement section includes a magnetic field measurement section configured to measure the strength of the third magnetic field, and the control section uses the magnetic field measurement section to measure the strength of the third magnetic field when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value. (11) The apparatus for measuring magnetic characteristics according to any one of (6) to (9), in which the third measurement section includes a magnetic field generation section configured to apply the third magnetic field to the continuously moving magnetic recording medium, the control section adjusts the strength of the third magnetic field by controlling a value of current to be supplied to the magnetic field generation section, and the control section converts the value of current that is supplied to the magnetic field generation section when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value to a strength of a magnetic field, and calculates the strength of the third magnetic field when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value. (12) The apparatus for measuring magnetic characteristics according to any one of (6) to (11), in which the light polarization state of the first reflected light, the light polarization state of the second reflected light, and the light polarization state of the third reflected light are a polarization axis angle of the first reflected light, a polarization axis angle of the second reflected light, and a polarization axis angle of the third reflected light, respectively.
(13) A method for manufacturing a magnetic recording medium, the method including: applying a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measuring a light polarization state of a first reflected light that is reflected; applying a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measuring a light polarization state of a second reflected light that is reflected; applying a third magnetic field having an opposite direction of the second magnetic field to the continuously moving magnetic recording medium, applying a third polarized light to the surface of the magnetic recording medium to which the third magnetic field is being applied, and measuring a light polarization state of a third reflected light that is reflected; adjusting a strength of the third magnetic field so that a measurement value of the light polarization state of the third reflected light is a mean value of a measurement value of the light polarization state of the first reflected light and a measurement value of the light polarization state of the second reflected light, and obtaining, as a coercive force, the strength of the third magnetic field when the measurement value of the light polarization state of the third reflected light becomes equal to the mean value; and adjusting a film formation condition for the continuously moving magnetic recording medium on the basis of the coercive force obtained. (14) A method for measuring magnetic characteristics, the method including: applying a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measuring a light polarization state of a first reflected light that is reflected; applying a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measuring a light polarization state of a second reflected light that is reflected; applying light to the surface of the continuously moving magnetic recording medium, and measuring a light polarization state of a third reflected light that is reflected; and calculating a ratio (ΔA20/ΔA10) of a difference ΔA20(=A2−A0) between a mean value A0of measurement values of the light polarization states of the first reflected light and the second reflected light and a measurement value A2of the light polarization state of the third reflected light to a difference ΔA10(=A1−A0) between the mean value A0and the measurement value A1of the light polarization state of the first reflected light.
(15) An apparatus for measuring magnetic characteristics, the apparatus including: a first measurement section configured to apply a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, apply a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measure a light polarization state of a first reflected light that is reflected; a second measurement section configured to apply a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, apply a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measure a light polarization state of a second reflected light that is reflected; a third measurement section configured to apply light to the surface of the continuously moving magnetic recording medium, and measure a light polarization state of a third reflected light that is reflected; and an arithmetic section configured to calculate a ratio (ΔA20/ΔA10) of a difference ΔA20(=A2−A0) between a mean value A0of measurement values of the light polarization states of the first reflected light and the second reflected light and a measurement value A2of the light polarization state of the third reflected light to a difference ΔA10(=A1−A0) between the mean value A0and the measurement value A1of the light polarization state of the first reflected light. (16) A method for manufacturing a magnetic recording medium, the method including: applying a first magnetic field to a continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a first polarized light to a surface of the magnetic recording medium to which the first magnetic field is being applied, and measuring a light polarization state of a first reflected light that is reflected; applying a second magnetic field having an opposite direction of the first magnetic field to the continuously moving magnetic recording medium to magnetically saturate the magnetic recording medium, applying a second polarized light to the surface of the magnetic recording medium to which the second magnetic field is being applied, and measuring a light polarization state of a second reflected light that is reflected; applying light to the surface of the continuously moving magnetic recording medium, and measuring a light polarization state of a third reflected light that is reflected; obtaining a squareness ratio by calculating a ratio (ΔA20/ΔA10) of a difference ΔA20(=A2−A0) between a mean value A0of measurement values of the light polarization states of the first reflected light and the second reflected light and a measurement value A2of the light polarization state of the third reflected light to a difference ΔA10(=A1−A0) between the mean value A0and the measurement value A1of the light polarization state of the first reflected light; and adjusting a film formation condition for the continuously moving magnetic recording medium on the basis of the squareness ratio obtained.
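The arithmetic in configurations (14) to (16) reduces to a few lines; the following Python sketch writes it out, with parameter names chosen for illustration rather than taken from the disclosure:

```python
# Sketch of the ratio in configurations (14) to (16). Parameter names
# are illustrative: a1 and a2 are the measurement values for the first
# and second reflected lights (negative- and positive-side saturation),
# and a_res corresponds to A2 in the text (third reflected light,
# measured with no applied field, i.e., the residual state).

def squareness_ratio(a1: float, a2: float, a_res: float) -> float:
    a0 = (a1 + a2) / 2.0        # mean value A0
    d10 = a1 - a0               # difference ΔA10 = A1 − A0
    d20 = a_res - a0            # difference ΔA20 = A2 − A0
    return d20 / d10            # squareness ratio ΔA20/ΔA10

# Example with the second embodiment's symbols theta1, theta2, theta4:
# squareness_ratio(-0.10, 0.10, -0.085) -> 0.85
```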
REFERENCE SIGNS LIST
10 Magnetic tape
10S1 Magnetic surface
10S2 Back surface
11 Substrate
12 Ground layer
13 Magnetic layer
13a Coating material
14 Back layer
20, 40 Film formation apparatus
21, 22 Roll
23 Film formation head
24 Drying furnace
30, 30A Apparatus for measuring magnetic characteristics
31 Guide roll (conveyance section)
31a Encoder
32 Negative-side saturation magnetization measurement section (first measurement section)
33 Positive-side saturation magnetization measurement section (second measurement section)
34 Magnetization measurement section (third measurement section)
35 PC (control section and arithmetic section)
36 Residual magnetization measurement section (third measurement section)
32a, 33a, 34a Electromagnet (magnetic field generation section)
32b, 33b, 34b Power source
32c, 33c, 34c, 36c Light polarization detection section
32c1, 33c1, 34c1, 36c1 Irradiation section
32c2, 33c2, 34c2, 36c2 Light receiving section
32c3, 33c3, 34c3, 36c3 Polarization axis angle detection circuit
72,146
11860249
DETAILED DESCRIPTION Overview The technology relates to rotating sensors such as lidars, scanning radars, cameras and the like, which may be employed with self-driving vehicles and other equipment. An integrated hybrid rotary assembly provides for sensor rotation, in which a single ferrite molded core is shared by a motor, rotary transformer and RF communication link. This hybrid configuration reduces cost, simplifies the manufacturing process, and can improve system reliability with fewer parts. By way of example, the RF link may operate between 2-50 MHz, up to 100 MHz or more, etc. FIG.1is a perspective view of an exemplary vehicle100, which may operate in autonomous and/or manual driving modes. As shown, the vehicle100includes various sensors for obtaining information about the vehicle's external environment. For instance, a roof-top housing110and dome arrangement112may include a lidar sensor as well as various cameras and/or radar units. Housing120, located at the front end of vehicle100, and housings130a,130bon the driver's and passenger's sides of the vehicle may each store a lidar and/or other sensor(s) such as cameras and radar units. For example, housing130amay be located along a quarter panel in front of the driver's side door. Vehicle100also includes housings140a,140bfor radar units, lidar and/or cameras also located towards the rear roof portion of the vehicle. Additional lidar, radar units and/or cameras (not shown) may be located at other places along the vehicle100. For instance, arrow150indicates that a sensor unit may be positioned along the rear of the vehicle100, such as on or adjacent to the bumper. While certain aspects of the disclosure may be particularly useful in connection with specific types of vehicles, the vehicle may be any type including, but not limited to, cars, trucks, motorcycles, buses, recreational vehicles, etc. The technology may also be used with other systems and configurations that employ rotating sensors, such as robots, interior and exterior building sensors, etc. FIG.2illustrates a block diagram200showing various systems and components of an example vehicle akin to vehicle100. For instance, the vehicle200may have one or more computing devices, such as computing devices202containing one or more processors204, memory206and other components typically present in general purpose computing devices. The memory206stores information accessible by the one or more processors204, including instructions208and data210that may be executed or otherwise used by the processor(s)204. The memory206may be of any type capable of storing information accessible by the processor, including a computing device-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories. Systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media. The instructions208may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions” and “programs” may be used interchangeably herein.
The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The data210may be retrieved, stored or modified by a given processor204in accordance with the instructions208. For instance, although the claimed subject matter is not limited by any particular data structure, the data may be stored in computing device registers, in a relational database as a table having a plurality of different fields and records, XML documents or flat files. The data may also be formatted in any computing device-readable format. The one or more processors204may be any conventional processors, such as commercially available CPUs. Alternatively, the one or more processors may be a dedicated device such as an ASIC or other hardware-based processor. AlthoughFIG.2functionally illustrates the processor(s), memory, and other elements of computing device(s)202as being within the same block, it will be understood by those of ordinary skill in the art that the processor, computing device, or memory may actually include multiple processors, computing devices, or memories that may or may not be stored within the same physical housing. For example, memory may be a hard drive or other storage media located in a housing different from that of computing devices202. Accordingly, references to a processor or computing device will be understood to include references to a collection of processors or computing devices or memories that may or may not operate in parallel, or which may have a distributed architecture. In one example, computing devices202may be control computing devices of an autonomous driving computing system or incorporated into vehicle100ofFIG.1. The autonomous driving computing system may be capable of communicating with various components of the vehicle in order to control the movement of vehicle100according to primary vehicle control code of memory206. For example, computing devices202may be in communication with various systems of vehicle100, such as deceleration system212, acceleration system214, steering system216, signaling system218, navigation system220, positioning system222, perception system224, power system226(e.g., the vehicle's engine or motor) and transmission system228in order to control the movement, speed, etc. of vehicle100in accordance with the instructions208of memory206. The wheels/tires230may be controlled directly by the computing devices202or indirectly via these other systems. Again, although these systems are shown as external to computing devices202, in actuality, these systems may also be incorporated into computing devices202, again as an autonomous driving computing system for controlling vehicle100. As an example, computing devices202may interact with one or more actuators of the deceleration system212and/or acceleration system214, such as brakes, accelerator pedal, and/or the engine or motor226of the vehicle, in order to control the speed of the vehicle. Similarly, one or more actuators of the steering system216, such as a steering wheel, steering shaft, and/or pinion and rack in a rack and pinion system, may be used by computing devices202in order to control the direction of vehicle100. For example, if vehicle100is configured for use on a road, such as a car or truck, the steering system may include one or more actuators to control the angle of wheels to turn the vehicle.
Signaling system218may be used by computing devices202in order to signal the vehicle's intent to other drivers or vehicles, for example, by lighting turn signals or brake lights when needed. Navigation system220may be used by computing devices202in order to determine and follow a route to a location. In this regard, the navigation system220and/or data210may store detailed map information, e.g., highly detailed maps identifying the shape and elevation of roadways, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real time traffic information, vegetation, or other such objects and information. Positioning system222may be used by computing devices202in order to determine the vehicle's relative or absolute position on a map or on the earth. For example, the positioning system222may include a GPS receiver or other positioning component to determine the device's latitude, longitude and/or altitude position. Other location systems such as laser-based localization systems, inertial-aided GPS, or camera-based localization may also be used to identify the location of the vehicle. The location of the vehicle may include an absolute geographical location, such as latitude, longitude, and altitude as well as relative location information, such as location relative to other cars immediately around it which can often be determined with less noise than absolute geographical location. The positioning system222may also include other devices in communication with computing devices202, such as an accelerometer, gyroscope or another direction/speed detection device to determine the direction and speed of the vehicle or changes thereto. By way of example only, an acceleration device may determine its pitch, yaw or roll (or changes thereto) relative to the direction of gravity or a plane perpendicular thereto. The device may also track increases or decreases in speed and the direction of such changes. The device's provision of location and orientation data as set forth herein may be provided automatically to the computing devices202, other computing devices and combinations of the foregoing. The perception system224also includes one or more components for detecting objects external to the vehicle such as other vehicles, obstacles in the roadway, traffic signals, signs, trees, etc. For example, the perception system224may include lasers (lidar)232, radar234, cameras236(e.g., optical or infrared) and/or any other detection sensors238that record data which may be processed by computing devices202, such as sonar, microphones, etc. In the case where the vehicle is a passenger vehicle such as a minivan, the minivan may include lidar or other sensors mounted on the roof or other convenient locations as shown inFIG.1. One or more sensors of the perception system224may be rotatable about an axis, for instance to provide a 360° field of view (or less). The perception system may be linked directly to the computing devices202via a dedicated bus and/or share a common communication bus with the other subsystems. The computing devices202may control the direction and speed of the vehicle according to various operation modes which include autonomous driving by controlling various components. By way of example, computing devices202may navigate the vehicle to a destination location completely autonomously using data from the detailed map information and navigation system220.
Computing devices202may use the positioning system222to determine the vehicle's location and perception system224to detect and respond to objects when needed to reach the location safely. In order to do so, computing devices202may cause the vehicle to accelerate (e.g., by increasing fuel or other energy provided to the engine by acceleration system214), decelerate (e.g., by decreasing the fuel supplied to the engine, changing gears, and/or by applying brakes by deceleration system212), change direction (e.g., by turning the front or rear wheels of vehicle100by steering system216), and signal such changes (e.g., by lighting turn signals of signaling system218). Thus, the acceleration system214and deceleration system212may be a part of a drivetrain that includes various components between an engine of the vehicle and the wheels230of the vehicle, such as transmission system228. Again, by controlling these systems, computing devices202may also control the drivetrain of the vehicle in order to maneuver the vehicle autonomously. Computing devices202may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user interface240(e.g., a mouse, keyboard, touch screen and/or microphone) and various electronic displays (e.g., a monitor having a screen or any other electrical device that is operable to display information). In this regard, an internal electronic display may be located within a cabin of vehicle100(not shown) and may be used by computing devices202to provide information to passengers within the vehicle100. Also shown inFIG.2is a communication system242. The communication system242may also include one or more wireless network connections to facilitate communication with other computing devices, such as passenger computing devices within the vehicle, and computing devices external to the vehicle, such as in another nearby vehicle on the roadway, or a remote server system. The network connections may include short range communication protocols such as Bluetooth, Bluetooth low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. Example Implementations In addition to the structures and configurations described above, various implementation aspects will now be described. Turning first toFIG.3, this figure illustrates a rotating sensor assembly300for use with the integrated architecture discussed herein. The rotating sensor assembly300may be a lidar sensor assembly, or may be a radar or camera sensor assembly, by way of example. As shown, the assembly300includes a housing310having a lens320disposed along a surface thereof. The housing310is rotatably coupled to a base340, which may be affixed to a part of a vehicle (e.g., roof, quarter panel, front or rear bumper, etc.), or which may be part of the vehicle itself. The housing310is configured to rotate about an axis340, e.g., entirely about the axis340to provide a full panoramic 360° field of view, or less than a full panoramic view, e.g., between 1°-90° or 45°-180°. The axis may be in a vertical direction or otherwise perpendicular relative to the path of travel, may have a different angle, or be variable.
In the case of a lidar sensor, as the housing310rotates one or more light beams350are emitted and exit the lens320to the environment external to the vehicle. Light reflected off of one or more objects in the environment returns toward the lens320as reflected light360. The received reflected light360may be processed by the perception system and/or by the computing devices of the vehicle. In order to control and operate the rotating sensor assembly300, electrical power, torque and control signals need to be supplied to it, and received data needs to be provided by the assembly to the perception system or other components. As noted above, the integrated architecture employs a magnetic core that is shared by a motor, rotary transformer and wireless communication link such as an RF communications link. One example of a rotor assembly400according to aspects of the technology is shown inFIG.4A, and a cutaway view410is shown inFIG.4B. The rotor assembly may have different configurations and may be assembled in different ways depending on the type and arrangement of the sensor. For instance,FIGS.5A-Cillustrate a first core500having a circular body502with a central opening504and a series of teeth506extending from the circular body502. In this example, the teeth506are straight (non-flared). The core500is ferromagnetic, and is configured as both a stator backiron and transformer core. For instance, the core could be mostly iron and/or made of various ferrites as well as sintered powdered metal, which may contain more than 50% nickel. The central opening504is configured to fit a shaft, bearings and a communication link (not shown) such as an RF link. The circular body502also includes a receptacle region508disposed between the central opening504and the teeth506. The receptacle region508may be circular as shown, or may have another configuration. FIG.5Billustrates a first assembly step, in which a set of bobbin assemblies510are placed on the teeth506. Each bobbin assembly510includes a bobbin512of an insulating material (e.g., plastic) and motor wiring514wound around the bobbin512. The bobbin assemblies510may be pre-wound and slid onto the teeth506of the stator core500.FIG.5Cillustrates the stator core500with bobbin assemblies510disposed on each of the teeth506. FIG.6Aillustrates an alternative stator and transformer core600. As with the core500, the core600is ferromagnetic and includes the central opening and the receptacle region. In this case, the core600includes a series of angled or otherwise flared teeth606. The ferromagnetic core may be molded as a single piece out of ferrite powder. For instance, the core can be made from ferrite or other moldable ferromagnetic powders. This can be accomplished with moldable or 3D printable ferromagnetic powder-bearing plastics. The flared teeth606may provide for better motor efficiency; however, the straight teeth506may be easier to manufacture using the ferrite powder. As shown inFIG.6B, bobbin assemblies610are affixed to each of the flared teeth. Here, bobbins612may be molded onto the flared teeth, and motor wiring614may be wound around the bobbins612. FIG.7Ashows a transformer700disposed in the receptacle region. This figure also shows communication link702disposed along the central opening. The transformer700and the communication link702may each comprise a coiled wire. The gauge and number of windings may depend on the overall size of the rotor assembly, the amount of power being supplied, the communication frequency, etc.
By way of example only, the transformer700may be formed of 8 or 10 gauge wire, having 6-10 turns in the winding. The communication link702, such as to provide a radio frequency (RF) communication link, may be formed of 28-32 gauge wire, with only 1-2 turns. For each, thicker or thinner gauge wire may be used (e.g., 6-12 gauge or 24-40 gauge), and more or fewer turns may be employed (e.g., at least 3-4 turns or no more than 20-30 turns). The transformer may be prewound and inserted into the receptacle region. The wire for the communication link could be arranged as a free-standing coil, or may be wound on a bobbin and inserted into the central opening. And as shown inFIG.7B, the ends of the wires614from the rotor assembly and from the transformer700can be positioned to extend away from the back side of the core. Depending on the type of sensor and the application, it may be useful to pot the rotary transformer and RF windings of the communication link. Potting improves heat transfer, reduces noise and vibrations, and can improve reliability by protecting the windings/coils.FIGS.8A-Fillustrate an exemplary potting technique. In particular,FIGS.8A-Billustrate respective upper and lower halves of a potting mold. Prior to insertion of the assembled core, the facing surfaces of the mold may be sprayed or coated with a mold release for ease of disassembly. Then, the core is placed in the lower half of the mold as shown inFIG.8C. Next, the upper half of the mold is affixed to the lower half as shown inFIG.8D. The upper half of the mold includes a number of holes so that the various wires can extend out from the mold. These include a first hole800arranged to receive the communication link wire, a second hole802arranged to receive the ends of the wire for the transformer, and a series of holes804adapted to receive the ends of the motor wires. A central receptacle806is used to fasten the upper and lower halves together, e.g., with a screw or other fastening mechanism. Once the mold is secured with the ends of the wires extending through the holes, the potting compound fills the mold. Various potting compounds, such as epoxy resins, may be employed, so long as they are non-corrosive to the wiring, have adequate heat transfer properties and are electrically insulating. Low temperature plastic overmolding may also be employed. After the potting compound has cured, the potted core is removed from the mold. Front and rear views of an exemplary potted core are shown inFIGS.8Eand F, respectively. At this point, any extra potting material may be trimmed as necessary. After potting, magnetic field sensors such as Hall Effect or other sensors (e.g., magnetoresistive (MR) or giant magnetoresistive (GMR) sensors) are connected to the stator assembly as shown inFIG.9. Here, printed circuit board (PCB)900and Hall Effect sensor elements902are shown. By way of example, the Hall Effect sensor elements902may be digital sensors or 3 axis analog sensors (with an A/D converter). The PCB900may be configured to include motor and transformer controllers (not shown), which may comprise integrated circuits housing one or more processors, ASICs or other hardware-based logic devices. Other components, such as surface mounted temperature sensors (not shown), may also be arranged on the PCB900, e.g., to overlie a motor trace to obtain motor temperature measurements. Similarly, a temperature sensor may be placed on the PCB adjacent to the transformer in order to monitor its temperature.
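As one hedged illustration of how such magnetic field sensors are commonly used, the sketch below estimates a rotor angle from two orthogonal analog Hall readings; the quadrature arrangement, the pole-pair count, and all names are assumptions for illustration and are not specified by this disclosure:

```python
# Hypothetical sketch: rotor angle from two quadrature analog Hall
# channels, one common use of analog Hall elements on a motor PCB.
# POLE_PAIRS and the channel arrangement are assumptions.

import math

POLE_PAIRS = 4   # assumed rotor magnet pole-pair count

def electrical_angle(hall_x: float, hall_y: float) -> float:
    """Electrical angle in radians from two quadrature Hall readings."""
    return math.atan2(hall_y, hall_x)

def mechanical_angle(hall_x: float, hall_y: float) -> float:
    """Mechanical angle estimate; ambiguous to 1/POLE_PAIRS of a turn
    because the field pattern repeats once per pole pair."""
    return electrical_angle(hall_x, hall_y) / POLE_PAIRS
```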
One example of a stator-rotor assembly1000is shown inFIG.10A, with rotor1002encircling stator1004.FIG.10Bis an enlarged partial view of the assembly1000including rotor backiron1006, which has a generally circular arrangement. However, as shown in the figures, the backiron1006includes at least one protrusion1008, for keying the rotor1002. For instance, the protrusion(s)1008can be used to align the rotor in a rotor housing (not shown) of the rotating sensor assembly. This arrangement may be used for certain rotating sensors having a radial flux. Other configurations for rotating sensors with axial flux need not employ the keying feature. The rotor backiron1006also includes a series of magnets1010arranged to face the wound motor bobbins of the stator1004. In one example, the rotor1002can be made from a unitary piece of steel or a steel laminate. The rotor magnets1010could be, e.g., Neodymium iron (NdFe), samarium-cobalt (SmCo), a hard ferrite or other magnetic material. The rotor magnets1010are desirably located internally along the backiron1006as shown inFIG.10Bfor ease of assembly without gluing. This approach also helps to reduce the airgap flux density for the stator core. The stator-rotor assembly1000is configured to transfer power, torque and data (e.g., via RF communication between 2-50 MHz, or more or less) with the structures described above. The shared magnetic core provides the foundation for a simplified, efficient and powerful configuration that can be employed with various types of rotating sensors, including lidar, radar, optical or infrared cameras, etc. Unless otherwise stated, the foregoing examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
11860250
DETAILED DESCRIPTION OF POSSIBLE EMBODIMENTS FIG.6shows distortion of the output voltage Voutof the half-bridge sensing circuit100ofFIG.5as a function of the orientation of the external magnetic field60, for a magnitude of the external magnetic field60of 50 Oe, 500 Oe and 800 Oe. For high magnitudes of the external magnetic field60(at least greater than 50 Oe), finite magnetic stiffness of the reference magnetization230causes a characteristic “triangular” distortion of the output signal Vout. The distortion results in an angular error in the orientation of the sense magnetization210relative to the orientation of the external magnetic field60. For an external magnetic field60of 50 Oe, the simulated curve corresponding to the output signal Vouthas a substantially sinusoidal shape and the angular error is minimal. For an external magnetic field60of 500 Oe and 800 Oe, the distortion becomes more pronounced, resulting in a more “triangular” shape of the simulated curve of the output signal Vout. The angular error is also increased. FIG.7shows distortion of the output voltage Voutof the half-bridge sensing circuit100ofFIG.5as a function of the orientation of the external magnetic field60, for different angles between the easy axis of the in-plane uniaxial anisotropy in the sense layer21and the pinning direction231of the reference magnetization230. The shape of this type of distortion depends on how the anisotropy easy axis is oriented with respect to the pinning direction231. In particular, when the easy axis is oriented substantially parallel to the pinning direction231(90°), the half-bridge output voltage Vouthas a characteristic “rectangular” shape. When the easy axis is oriented substantially perpendicular to the pinning direction231(0°), the half-bridge output voltage Vouthas a characteristic “triangular” shape. FIG.8shows distortion of the output voltage Voutof the half-bridge sensing circuit100ofFIG.5as a function of the orientation of the external magnetic field60, for a magnitude of the external magnetic field60of 130 Oe, 230 Oe and 780 Oe. The easy axis of magnetic anisotropy in the sense layer21is aligned with the reference layer pinning direction231. Here, the distortion is due to finite in-plane uniaxial magnetic anisotropy of the sense layer21. The distortion becomes more pronounced when the magnitude of the external magnetic field60decreases. In an embodiment, a “compensation” effect can be obtained by combining the “triangular” distortion of the output signal Voutdue to the finite magnetic stiffness of the reference magnetization230with the “rectangular” distortion that arises when the easy axis of the sense magnetization210is oriented substantially parallel to the pinning direction231of the reference magnetization230. The compensation effect corresponds to the two distortions canceling each other, providing a more sinusoidal shape of the half-bridge output Vout. Since these two types of distortions have opposite dependence on the external magnetic field60, the compensation effect will be reached within a certain magnitude range of the external magnetic field60. In an embodiment represented inFIG.9, the sense layer21comprises an in-plane uniaxial sense magnetic anisotropy having an easy axis211that is aligned substantially parallel to the pinning direction231, such that an angular deviation in the alignment of the direction of the sense magnetization210in the external magnetic field60is minimized for a range of the external magnetic field60.
In the example ofFIG.9, the in-plane uniaxial sense magnetic anisotropy of the sense layer21is provided by an elliptical shape of the sensor element20wherein the easy axis211corresponds to the long axis of the elliptical shape that is aligned substantially parallel to the pinning direction231. In an embodiment, a ratio of the magnitude of the sense magnetic anisotropy of the sense layer21over the pinning strength of the reference magnetization230in the pinning direction231is selected so as to adjust the range of the external magnetic field60for which the compensation effect is obtained, i.e., for which the angular deviation is minimized. An example of such adjustment is shown inFIG.10. For example, the ratio can be selected to obtain a compensation effect that corresponds to the angular deviation (or error) being minimized within a predetermined range of the external magnetic field60. For example, the ratio can be selected such that the angular deviation is equal to or less than 0.5° within a range of the external magnetic field60of about 200 Oe with a central working point of 800 Oe (FIG.10c). Simulations (seeFIG.10) show that for typical stiffness values of the reference magnetization230, such a compensation effect can be obtained for a magnitude range of the external magnetic field60of about 200 Oe. In this magnitude range of the external magnetic field60, the angular error can be as low as 0.5°. Thus, aligning the anisotropy easy axis of the sense layer21substantially parallel to the pinning direction231of the reference magnetization230allows for a low angular error, for example an angular error equal to or lower than 0.5°, within a magnitude range of the external magnetic field Hextof about 200 Oe at any desired central working point. The desired central working point of the external magnetic field60for which the angular error is low can be selected by adjusting the strength of the anisotropy of the sense layer21with respect to the pinning strength of the reference magnetization230. The strength of the anisotropy of the sense layer21can be adjusted by adjusting the ellipticity of the magnetic sensor element20, in the case the sense magnetic anisotropy of the sense layer21is provided by an elliptical shape of the magnetic sensor element20. The ellipticity can be modified to take into account other sources of magnetic anisotropy in the sense layer21(for example, growth-induced or stress-induced anisotropy, etc.). FIGS.10ato10cshow simulation results of the angular deviation as a function of the external magnetic field for a strength Hkof the magnetic anisotropy of the sense layer21equal to 20 Oe (FIG.10a), 40 Oe (FIG.10b) and 60 Oe (FIG.10c). FIGS.10ato10cshow that an angular error equal to or lower than 0.5° can be obtained for: an external magnetic field Hextbetween about 400 Oe and 700 Oe for a strength Hkof 20 Oe; an external magnetic field Hextbetween about 500 Oe and 800 Oe for a strength Hkof 40 Oe; and an external magnetic field Hextbetween about 700 Oe and 900 Oe for a strength Hkof 60 Oe. In a preferred embodiment, the plurality of sensor elements20are arranged in a half-bridge circuit, such as the half-bridge sensing circuit100shown inFIG.5, or a full-bridge sensing circuit200shown inFIG.11.
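The scaling behind this compensation effect can be sketched numerically. In the toy model below, the peak angular error from the sense-layer anisotropy is taken as roughly Hk/(2·Hext) and the peak error from the finite reference stiffness as roughly Hext/Hud, with the two contributions entering with opposite signs, as the stated cancellation suggests. The value of Hud and the simplified error expressions are modeling assumptions chosen to reproduce the quoted ranges; this is not the simulation actually used for the figures.

import numpy as np

H_UD = 21_000.0  # assumed pinning strength of the reference layer, Oe (not given in the text)

def peak_angular_error_deg(h_ext, h_k, h_ud=H_UD):
    """First-order peak angular error of the sensed angle, in degrees.

    err_aniso: lag of the sense magnetization toward its easy axis, ~Hk/(2*Hext).
    err_stiff: apparent-angle shift from the tilt of the reference
               magnetization toward the applied field, ~Hext/Hud.
    The two contributions are combined with opposite signs, reflecting the
    compensation effect described in the text.
    """
    err_aniso = h_k / (2.0 * h_ext)   # radians
    err_stiff = h_ext / h_ud          # radians
    return np.degrees(np.abs(err_aniso - err_stiff))

h_ext = np.linspace(100.0, 1200.0, 1101)
for h_k in (20.0, 40.0, 60.0):  # the three anisotropy strengths simulated above, Oe
    err = peak_angular_error_deg(h_ext, h_k)
    in_spec = h_ext[err <= 0.5]
    print(f"Hk = {h_k:2.0f} Oe: error <= 0.5 deg for about "
          f"{in_spec.min():.0f}-{in_spec.max():.0f} Oe "
          f"(null near {h_ext[np.argmin(err)]:.0f} Oe)")

With the assumed Hud, the error null moves from roughly 460 Oe at Hk of 20 Oe to roughly 790 Oe at Hk of 60 Oe, and each working window is about 200 Oe wide, in line with the ranges quoted above.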
The full-bridge circuit200comprises two half-bridge circuits100, each producing an output voltage Voutbetween the two sensor elements20.FIG.12shows an example of a full 2D sensor device circuit300comprising two full-bridge circuits200and200′. The sensing axis250of one of the full-bridge circuits200is oriented substantially orthogonal to the sensing axis250of the other full-bridge circuit200′. One of the full-bridge circuits200produces a first output voltage Vout1between the two sensor elements20of each of its half-bridges. The other full-bridge circuit200′ produces a second output voltage Vout2between the two sensor elements20of each of its half-bridges. The magnetic field angle can be calculated by applying the arctangent function to the ratio of the measured voltages Vout1/Vout2; a short numerical sketch of this calculation follows the reference list below. The configuration of the full 2D sensor device circuit300allows for sensing the external magnetic field60in all orientations in a plane (for example an external magnetic field60having a horizontal and a vertical component in the page ofFIG.12). Magnetic anisotropy in the sensing layer21is adjusted by the ellipticity of the sensor element20. The sensing elements20comprised in the circuits100,200and300have a substantially equal magnetic anisotropy strength in the sensing layer21. In each sensing element20, the easy axis211of anisotropy in the sensing layer21coincides with the pinning direction231of the reference magnetization230. Each branch of the half-bridge circuit100, of the full-bridge circuit200(such as the one shown inFIG.11), or of the circuit300comprising two full bridges200, comprises two sensor elements20, with the pinning direction231of the reference magnetization230in one sensor element20oriented substantially orthogonal to the pinning direction231of the reference magnetization230in the other sensor element20. REFERENCE NUMBERS AND SYMBOLS
10 magnetic angular sensor device, sensing circuit
100 half-bridge sensing circuit
200 full-bridge sensing circuit
300 full-bridge sensor circuit
20 magnetic sensor element
21 ferromagnetic sensing layer
210 sense magnetization
211 easy axis of the sense magnetic anisotropy
22 tunnel barrier layer
23 ferromagnetic reference layer
230 reference magnetization
231 pinning direction
24 pinning layer
60, Hext external magnetic field
Hk strength of magnetic anisotropy
Hud strength of reference layer pinning
Vin input voltage
Vout output voltage
θ angle
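As flagged above, a minimal Python sketch of the angle calculation from the two bridge outputs Vout1 and Vout2 follows. It assumes ideal, amplitude-matched sine/cosine outputs (offset and gain calibration are omitted) and uses the two-argument arctangent so the result is unambiguous over a full turn.

import math

def field_angle_deg(vout1: float, vout2: float) -> float:
    """In-plane field angle from the two full-bridge output voltages.

    atan2 is preferred over a plain arctangent of Vout1/Vout2: it covers
    the full 0-360 degree range and stays defined when Vout2 crosses zero.
    """
    return math.degrees(math.atan2(vout1, vout2)) % 360.0

# Ideal bridges: Vout1 ~ sin(angle), Vout2 ~ cos(angle).
print(field_angle_deg(0.5, 0.866))  # approximately 30.0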
11860251
DETAILED DESCRIPTION An object of the technology is to provide a magnetic sensor that can reduce the concentration of magnetic charges at the edge of a magnetic layer of a magnetoresistive element to expand a range where a detection signal changes linearly. In the following, some example embodiments and modification examples of the technology are described in detail with reference to the accompanying drawings. Note that the following description is directed to illustrative examples of the disclosure and not to be construed as limiting the technology. Factors including, without limitation, numerical values, shapes, materials, components, positions of the components, and how the components are coupled to each other are illustrative only and not to be construed as limiting the technology. Further, elements in the following example embodiments which are not recited in a most-generic independent claim of the disclosure are optional and may be provided on an as-needed basis. The drawings are schematic and are not intended to be drawn to scale. Like elements are denoted with the same reference numerals to avoid redundant descriptions. Note that the description is given in the following order. First Example Embodiment Example embodiments of the technology will now be described in detail with reference to the drawings. An outline of a magnetic sensor system including a magnetic sensor according to a first example embodiment of the technology will initially be described with reference toFIG.1. A magnetic sensor system100according to the present example embodiment includes a magnetic sensor1according to the present example embodiment and a magnetic field generator5. The magnetic field generator5generates a target magnetic field MF that is a magnetic field for the magnetic sensor1to detect (magnetic field to be detected). The magnetic field generator5is rotatable about a rotation axis C. The magnetic field generator5includes a pair of magnets6A and6B. The magnets6A and6B are arranged at symmetrical positions with a virtual plane including the rotation axis C at the center. The magnets6A and6B each have an N pole and an S pole. The magnets6A and6B are located in an orientation such that the N pole of the magnet6A is opposed to the S pole of the magnet6B. The magnetic field generator5generates the target magnetic field MF in the direction from the N pole of the magnet6A to the S pole of the magnet6B. The magnetic sensor1is located at a position where the target magnetic field MF at a predetermined reference position can be detected. The target magnetic field MF at the reference position is part of the magnetic fields generated by the respective magnets6A and6B. The reference position may be located on the rotation axis C. In the following description, the reference position is located on the rotation axis C. The magnetic sensor1detects the target magnetic field MF generated by the magnetic field generator5, and generates a detection value Vs. The detection value Vs has a correspondence with a relative position, or rotational position in particular, of the magnetic field generator5with respect to the magnetic sensor1. The magnetic sensor system100can be used as a device for detecting the rotational position of a rotatable moving part in an apparatus that includes the moving part. Examples of such an apparatus include a joint of an industrial robot.FIG.1shows an example where the magnetic sensor system100is applied to an industrial robot200. 
The industrial robot200shown inFIG.1includes a moving part201and a support unit202that rotatably supports the moving part201. The moving part201and the support unit202are connected at a joint. The moving part201rotates about the rotation axis C. For example, if the magnetic sensor system100is applied to the joint of the industrial robot200, the magnetic sensor1may be fixed to the support unit202, and the magnets6A and6B may be fixed to the moving part201. Now, we define X, Y, and Z directions as shown inFIG.1. The X, Y, and Z directions are orthogonal to one another. In the present example embodiment, a direction parallel to the rotation axis C (inFIG.1, a direction out of the plane of the drawing) will be referred to as the X direction. InFIG.1, the Y direction is shown as a rightward direction, and the Z direction is shown as an upward direction. The opposite directions to the X, Y, and Z directions will be referred to as −X, −Y, and −Z directions, respectively. As used herein, the term “above” refers to positions located forward of a reference position in the Z direction, and “below” refers to positions located on a side of the reference position opposite to “above”. In the present example embodiment, the direction of the target magnetic field MF at the reference position is expressed as a direction within the YZ plane including the reference position on the rotation axis C. The direction of the target magnetic field MF at the reference position rotates about the reference position within the foregoing YZ plane. The magnetic sensor1includes magnetoresistive elements (hereinafter, referred to as MR elements) whose resistances change with an external magnetic field. In the present example embodiment, the resistances of the MR elements change with a change in the direction of the target magnetic field MF. The magnetic sensor1generates detection signals corresponding to the resistances of the MR elements, and generates a detection value Vs based on the detection signals. Next, a configuration of the magnetic sensor1according to the present example embodiment will be described. An example of a circuit configuration of the magnetic sensor1will initially be described with reference toFIG.2. In the example shown inFIG.2, the magnetic sensor1includes four resistor sections11,12,13, and14, two power supply nodes V1and V2, two ground nodes G1and G2, and two signal output nodes E1and E2. The resistor sections11to14each include at least one MR element30. If each of the resistor sections11to14includes a plurality of MR elements30, the plurality of MR elements30in each of the resistor sections11to14may be connected in series. The resistor section11is provided between the power supply node V1and the signal output node E1. The resistor section12is provided between the signal output node E1and the ground node G1. The resistor section13is provided between the power supply node V2and the signal output node E2. The resistor section14is provided between the signal output node E2and the ground node G2. The power supply nodes V1and V2are configured to receive a power supply voltage of predetermined magnitude. The ground nodes G1and G2are connected to the ground. The potential of the connection point between the resistor section11and the resistor section12changes depending on the resistance of the at least one MR element30of the resistor section11and the resistance of the at least one MR element30of the resistor section12.
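The potentials just described follow the usual resistive-divider relation; the short Python sketch below makes this concrete. The MR response model and all numeric values are illustrative assumptions, not taken from this description.

import math

def half_bridge_out(v_supply: float, r_top: float, r_bottom: float) -> float:
    """Potential at the connection point of two series resistor sections.

    r_top sits between the power supply node and the output node (resistor
    section 11 or 13); r_bottom sits between the output node and ground
    (resistor section 12 or 14).
    """
    return v_supply * r_bottom / (r_top + r_bottom)

# Toy MR response: resistance swings +/- dr around r0 with the field angle,
# with opposite signs for the oppositely magnetized resistor sections.
r0, dr, vcc = 1000.0, 50.0, 3.3  # ohms, ohms, volts (assumed)
for angle_deg in (0, 45, 90):
    a = math.radians(angle_deg)
    s1 = half_bridge_out(vcc, r0 - dr * math.cos(a), r0 + dr * math.cos(a))
    s2 = half_bridge_out(vcc, r0 - dr * math.sin(a), r0 + dr * math.sin(a))
    print(angle_deg, round(s1, 3), round(s2, 3))

A detection value analogous to Vs can then be formed from the two signals, for example via a two-argument arctangent after removing the common Vcc/2 offset.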
The signal output node E1outputs a signal corresponding to the potential of the connection point between the resistor section11and the resistor section12as a detection signal S1. The potential of the connection point between the resistor section13and the resistor section14changes depending on the resistance of the at least one MR element30of the resistor section13and the resistance of the at least one MR element30of the resistor section14. The signal output node E2outputs a signal corresponding to the potential of the connection point between the resistor section13and the resistor section14as a detection signal S2. The magnetic sensor1further includes a detection value generation circuit21that generates the detection value Vs on the basis of the detection signals S1and S2. The detection value generation circuit21includes an application specific integrated circuit (ASIC) or a microcomputer, for example. Next, the configuration of the magnetic sensor1will be described in more detail with attention focused on one MR element30.FIG.3is a schematic diagram showing a part of the magnetic sensor1.FIG.4is a cross-sectional view showing a part of the magnetic sensor1.FIG.4shows a cross section parallel to the YZ plane and intersecting the MR element30.FIG.5is a plan view showing a part of the magnetic sensor1. The magnetic sensor1further includes a support member60. The support member60supports all the MR elements30included in the resistor sections11to14. As shown inFIGS.3and4, the support member60includes an opposed surface60aopposed, at least in part, to the MR elements30, and a bottom surface60blocated opposite the opposed surface60a. The opposed surface60ais located at an end of the support member60in the Z direction. The bottom surface60bis located at an end of the support member60in the −Z direction. The bottom surface60bis parallel to the XY plane. The bottom surface60bcorresponds to the reference plane in the technology. For example, the magnetic sensor1may be manufactured with the bottom surface60bor a surface corresponding to the bottom surface60bmade horizontal. For example, the magnetic sensor1may be installed based on the direction or tilt of the bottom surface60bor the surface corresponding to the bottom surface60b. The bottom surface60bmay thus serve as a reference plane in at least either the manufacturing or the installing of the magnetic sensor1. At least a part of the opposed surface60aof the support member60is inclined relative to the reference plane, i.e., the bottom surface60b. In the present example embodiment, the opposed surface60aincludes a flat portion60a1parallel to the bottom surface60band at least one curved portion60a2not parallel to the bottom surface60b. As shown inFIG.4, the curved portion60a2is a convex surface protruding in a direction away from the bottom surface60b. The curved portion60a2has a curved shape (arch shape) curved to protrude in a direction away from the bottom surface60b(Z direction) in a given cross section parallel to the YZ plane. In a given cross section parallel to the YZ plane, the distance from the bottom surface60bto the curved portion60a2is maximized at the center of the curved portion60a2in a direction parallel to the Y direction (hereinafter, referred to simply as the center of the curved portion60a2). The curved portion60a2extends along the X direction. As shown inFIG.3, the overall shape of the curved portion60a2is a semicylindrical curved surface formed by moving the curved shape (arch shape) shown inFIG.4along the X direction. 
At least a part of the MR element30is located on the curved portion60a2. A portion of the curved portion60a2from an edge at the end of the curved portion60a2in the −Y direction to the center of the curved portion60a2will be referred to as a first inclined surface and be denoted by the reference symbol SL1. A portion of the curved portion60a2from an edge at the end of the curved portion60a2in the Y direction to the center of the curved portion60a2will be referred to as a second inclined surface and be denoted by the reference symbol SL2. InFIG.3, the border between the first inclined surface SL1and the second inclined surface SL2is shown by a dotted line. Both the first and second inclined surfaces SL1and SL2are inclined relative to the reference plane, i.e., the bottom surface60b. In the present example embodiment, the entire MR element30is located on the first inclined surface SL1or the second inclined surface SL2.FIGS.3and4show how the MR element30is located on the first inclined surface SL1. The MR element30has a shape that is long in the X direction. As employed herein, the lateral direction of the MR element30will be referred to as the width direction of the MR element30or simply as the width direction. The MR element30may have a planar shape (shape seen in the Z direction), like a rectangle, including a constant width portion having a constant or substantially constant width in the width direction regardless of the position in the X direction. The MR element30may have a planar shape including no constant width portion, like an ellipse. Examples of the planar shape of the MR element30including a constant width portion include a rectangular shape where both longitudinal ends are straight, an oval shape where both longitudinal ends are semicircular, and a shape where both longitudinal ends are polygonal.FIG.3shows an example where the MR element30has a rectangular planar shape. In a second modification example to be described later, the MR element30will be described to have an oval planar shape. The MR element30has a width that is a dimension in the direction parallel to the Y direction. This dimension of the MR element30in the width direction is constant or substantially constant regardless of the position in the X direction. The support member60includes a substrate61and an insulating layer62located on the substrate61. The substrate61is a semiconductor substrate made of a semiconductor such as Si, for example. The substrate61has a top surface located at an end of the substrate61in the Z direction, and a bottom surface located at an end of the substrate61in the −Z direction. The bottom surface60bof the support member60is constituted by the bottom surface of the substrate61. The substrate61has a constant thickness (dimension in the Z direction). The insulating layer62is made of an insulating material such as SiO2, for example. The insulating layer62includes a top surface located at an end in the Z direction. The opposed surface60aof the support member60is constituted by the top surface of the insulating layer62. The insulating layer62has a cross-sectional shape such that the curved surface portion60a2is formed on the opposed surface60a. Specifically, the insulating layer62has a cross-sectional shape of bulging out in the Z direction in a given cross section parallel to the YZ plane. The magnetic sensor1further includes a lower electrode41, an upper electrode42, and insulating layers63,64and65. 
InFIG.3, the lower electrode41, the upper electrode42, and the insulating layers63to65are omitted. InFIG.5, the insulating layers63to65are omitted. The lower electrode41is located on the opposed surface60aof the support member60(the top surface of the insulating layer62). The insulating layer63is located on the opposed surface60aof the support member60, around the lower electrode41. The MR element30is located on the lower electrode41. The insulating layer64is located on the lower electrode41and the insulating layer63, around the MR element30. The upper electrode42is located on the MR element30and the insulating layer64. The insulating layer65is located on the insulating layer64, around the upper electrode42. The magnetic sensor1further includes a non-shown insulating layer covering the upper electrode42and the insulating layer65. The lower electrode41and the upper electrode42are made of a conductive material such as Cu, for example. The insulating layers63to65and the non-shown insulating layer are made of an insulating material such as SiO2, for example. The substrate61and the portions of the magnetic sensor1stacked on the substrate61are referred to collectively as a detection unit.FIG.4can be said to show the detection unit. The detection value generation circuit21shown inFIG.2may be integrated with or separate from the detection unit. Now, the configuration of the MR element30will be described in detail with reference toFIG.6. In particular, in the present example embodiment, the MR element30is a spin-valve MR element of current perpendicular-to-plane (CPP) structure. As shown inFIG.6, the MR element30includes a magnetization pinned layer32having a magnetization whose direction is fixed, a free layer34having a magnetization whose direction is variable depending on the direction of an external magnetic field, and a spacer layer33located between the magnetization pinned layer32and the free layer34. The MR element30may be a tunneling magnetoresistive (TMR) element or a giant magnetoresistive (GMR) element. In the TMR element, the spacer layer33is a tunnel barrier layer. In the GMR element, the spacer layer33is a nonmagnetic conductive layer. The resistance of the MR element30changes with an angle that the direction of the magnetization of the free layer34forms with respect to the direction of the magnetization of the magnetization pinned layer32. The resistance is minimized if the angle is 0°. The resistance is maximized if the angle is 180°. The magnetization pinned layer32, the spacer layer33, and the free layer34are stacked in this order from the lower electrode41in the direction toward the upper electrode42. The MR element30further includes an underlayer31interposed between the magnetization pinned layer32and the lower electrode41, and a cap layer35interposed between the free layer34and the upper electrode42. The arrangement of the magnetization pinned layer32, the spacer layer33, and the free layer34in the MR element30may be vertically reversed from that shown inFIG.6. The direction of the magnetization of the magnetization pinned layer32is desirably orthogonal to the longitudinal direction of the MR element30. In the present example embodiment, the MR element30is located on the first inclined surface SL1or the second inclined surface SL2inclined relative to the bottom surface60b. The direction of the magnetization of the magnetization pinned layer32is thus also inclined relative to the bottom surface60b. 
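Returning to the resistance-angle relation noted above, a standard way to model it (a common approximation in the magnetoresistance literature, not a formula stated in this description) is:

R(θ) ≈ Rmin + ((Rmax − Rmin)/2)·(1 − cos θ)

This reproduces the stated extremes, R equal to Rmin at θ = 0° and Rmax at θ = 180°, with a smooth monotonic transition in between.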
For the sake of convenience, in the present example embodiment, the direction of the magnetization of the magnetization pinned layer32located on the first inclined surface SL1will be referred to as a U direction or a −U direction. The U direction is a direction rotated from the Y direction toward the Z direction by a predetermined angle. The −U direction is the direction opposite to the U direction. For the sake of convenience, in the present example embodiment, the direction of the magnetization of the magnetization pinned layer32located on the second inclined surface SL2will be referred to as a V direction or a −V direction. The V direction is a direction rotated from the Y direction toward the −Z direction by a predetermined angle. The −V direction is the direction opposite to the V direction. The X, U and V directions are shown inFIG.2. For the sake of convenience, inFIG.2, the U direction and the V direction are indicated by the same arrow. InFIG.2, the filled arrows indicate the directions of the magnetizations of the magnetization pinned layers32of the MR elements30included in the respective resistor sections11to14. The magnetic sensor1may be configured so that the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections11and14are the U direction, and the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections12and13are the −U direction. Alternatively, the magnetic sensor1may be configured so that the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections11and14are the V direction, and the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections12and13are the −V direction. Alternatively, the magnetic sensor1may include a first circuit portion and a second circuit portion each including the resistor sections11to14. The first circuit portion may be configured so that the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections11and14are the U direction, and the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections12and13are the −U direction. The second circuit portion may be configured so that the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections11and14are the V direction, and the directions of the magnetizations of the magnetization pinned layers32of the MR elements30in the resistor sections12and13are the −V direction. The free layer34corresponds to a magnetic layer according to the technology. The free layer34has magnetic shape anisotropy where the direction of the easy axis of magnetization intersects the direction of the magnetization of the magnetization pinned layer32. In the present example embodiment, the MR element30is patterned to a shape that is long in the X direction. This gives the free layer34magnetic shape anisotropy where the direction of the easy axis of magnetization is parallel to the X direction. Up to this point, the configuration of the magnetic sensor1has been described with attention focused on one MR element30. In the present example embodiment, the resistor sections11to14each include at least one MR element30. The magnetic sensor1thus includes a plurality of MR elements30, a plurality of lower electrodes41, and a plurality of upper electrodes42. 
As shown inFIG.5, each of the lower electrodes41has a long slender shape. The MR element30is provided on the top surface of the lower electrode41, near one end in the longitudinal direction. Each upper electrode42has a long slender shape and is located over two lower electrodes41to electrically connect two adjoining MR elements30. The number of curved portions60a2of the opposed surface60aof the support member60may be one or more than one. If the number of curved portions60a2is one, the plurality of MR elements30are located on the one curved portion60a2. In such a case, the plurality of MR elements30may be located on either one of the first and second inclined surfaces SL1and SL2or on both the first and second inclined surfaces SL1and SL2. If the number of curved portions60a2is more than one, one or a plurality of MR elements30may be located on one curved portion60a2. In such a case, the plurality of curved portions60a2may be arranged along one direction. Alternatively, the plurality of curved portions60a2may be arranged in a plurality of rows, i.e., more than one curved portion60a2in both the X and Y directions. Next, the MR element30will be described in more detail with reference toFIGS.6and7.FIG.7is an explanatory diagram for describing the shape of the free layer34.FIG.7is an enlarged view of a part of the cross section shown inFIG.4. InFIG.7, the underlayer31and the cap layer35of the MR element30are omitted. As shown inFIGS.6and7, the free layer34includes a first surface34a, a second surface34bopposite to the first surface34a, and an outer peripheral surface connecting the first surface34aand the second surface34b. The first surface34ais located farther from the opposed surface60aof the support member60than is the second surface34b. The first surface34ais in contact with the cap layer35. The second surface34bis in contact with the spacer layer33. In the present example embodiment, the MR element30is patterned to a shape that is long in the X direction. The first and second surfaces34aand34bthus each have a shape that is long in the X direction. The first surface34ahas a first edge Ed1and a second edge Ed2located at both lateral ends of the first surface34a. At least either one of the first and second edges Ed1and Ed2is located above the curved portion60a2of the opposed surface60aof the support member60. At least a part of the first surface34ais thus inclined relative to the reference plane, i.e., the bottom surface60bof the support member60. As employed herein, an angle that the first surface34aforms with the bottom surface60bof the support member60will be referred to as an inclination angle and be denoted by the symbol θ. The inclination angle θ is 0° or greater and not greater than 90°. At least a part of the first surface34ais inclined relative to the bottom surface60bof the support member60so that the inclination angle θ is greater than 0°. The shape of the free layer34can change discontinuously and greatly near the outer peripheral surface. To accurately define the inclination angle θ, in the present example embodiment, both lateral ends of the portion of the first surface34a, not including discontinuously and greatly changing areas, will be referred to, for the sake of convenience, as the first and second edges Ed1and Ed2. The first edge Ed1and the second edge Ed2may be located inside the first surface34a, inside the border between the first surface34aand the outer peripheral surface. 
If the shape of the free layer34does not change discontinuously, the first edge Ed1and the second edge Ed2fall on the border between the first surface34aand the outer peripheral surface. In the present example embodiment, both the first and second edges Ed1and Ed2are located above the first inclined surface SL1of the curved portion60a2, or both the first and second edges Ed1and Ed2are located above the second inclined surface SL2of the curved portion60a2. The entire first surface34ais thus inclined relative to the reference plane, i.e., the bottom surface60bof the support member60. The distance from the bottom surface60bof the support member60to the first edge Ed1is smaller than the distance from the bottom surface60bof the support member60to the second edge Ed2. FIG.7shows a cross section intersecting the free layer34and perpendicular to the longitudinal direction of the first surface34a(direction parallel to the X direction). Such a cross section will hereinafter be denoted by the symbol S. The cross section S is also a cross section parallel to the YZ plane. The inclination angle θ at the first edge Ed1will be referred to as an inclination angle θ1. The inclination angle θ at the second edge Ed2will be referred to as an inclination angle θ2. The inclination angle θ at a predetermined point P on the first surface34abetween the first edge Ed1and the second edge Ed2will be denoted by the symbol θp. In a given cross section S, the inclination angle θ1at the first edge Ed1is greater than the inclination angle θp at the predetermined point P. In the given cross section S, the inclination angle θ2at the second edge Ed2is smaller than the inclination angle θp. As shown inFIG.7, in the given cross section S, the inclination angle θ increases toward the first edge Ed1from the second edge Ed2. InFIG.7, the predetermined point P refers to the midpoint between the first and second edges Ed1and Ed2on the first surface34ain the given cross section S. The inclination angle θ at a given position on the first surface34achanges depending on the angle that the opposed surface60aof the support member60forms with the reference plane, i.e., the bottom surface60bof the support member60(hereinafter, referred to as the inclination angle of the opposed surface60a). Specifically, the inclination angle θ at a given position on the first surface34ais substantially the same as the inclination angle of the opposed surface60aat the position on the opposed surface60aclosest to the given position. The inclination angle θ thus increases as the inclination angle of the opposed surface60aincreases. The free layer34has a thickness T that is a dimension in a direction perpendicular to the first surface34a. The thickness T can also be said to be the distance between the first and second surfaces34aand34bin the direction perpendicular to the first surface34a. The thickness T at the first edge Ed1will be referred to as a thickness T1. The thickness T at the second edge Ed2will be referred to as a thickness T2. The thickness T at the predetermined point P will be referred to as a thickness Tp. For the sake of convenience, an imaginary surface is assumed by extending the second surface34balong the curved portion60a2, and the thickness T2is defined as the distance between the first surface34aand the imaginary surface in the direction perpendicular to the first surface34a. In a given cross section S, the thickness T1at the first edge Ed1is smaller than the thickness Tp at the predetermined point P.
In the given cross section S, the thickness T2at the second edge Ed2is greater than the thickness Tp. As shown inFIG.7, in the given cross section S, the thickness T decreases toward the first edge Ed1from the second edge Ed2. The thickness T at a given position on the first surface34achanges depending on the inclination angle of the opposed surface60a. Specifically, the thickness T at a given position on the first surface34adecreases as the inclination angle of the opposed surface60aat the position on the opposed surface60aclosest to the given position increases. From the relationship between the inclination angle θ and the inclination angle of the opposed surface60aand the relationship between the thickness T and the inclination angle of the opposed surface60a, the thickness T decreases as the inclination angle θ increases. In the present example embodiment, the entire MR element30is located on the first inclined surface SL1or the second inclined surface SL2. The angle that the first inclined surface SL1or the second inclined surface SL2forms with the bottom surface60bof the support member60will hereinafter be referred to as an inclined surface angle and be denoted by the symbol ϕ. As shown inFIG.7, the inclination angle θ at a given position on the first surface34aincreases as the inclined surface angle ϕ at the position on the opposed surface60aclosest to the given position increases. As shown inFIG.7, the thickness T at a given position on the first surface34adecreases as the inclined surface angle ϕ at the position on the opposed surface60aclosest to the given position increases. InFIG.7, the inclined surface angle ϕ at a position on the opposed surface60aclosest to the first edge Ed1is denoted by the symbol ϕ1. The inclined surface angle ϕ at a position on the opposed surface60aclosest to the second edge Ed2is denoted by the symbol ϕ2. The inclined surface angle ϕ at a position on the opposed surface60aclosest to the predetermined point P is denoted by the symbol ϕp. The angle ϕ in a given cross section S is greater at the position on the opposed surface60aclosest to the first edge Ed1than at the position on the opposed surface60aclosest to the predetermined point P. In other words, the angle ϕ1is greater than the angle ϕp. The angle ϕ2is smaller than the angle ϕp. As shown inFIG.7, the angle ϕ in the given cross section S increases toward the position on the opposed surface60aclosest to the first edge Ed1from the position on the opposed surface60aclosest to the second edge Ed2. Examples of the thickness T and the inclined surface angle ϕ will now be described. The following description takes as a practical example a case where a TMR element was formed on the first inclined surface SL1as the MR element30. In this example, the TMR element was formed by using a magnetron sputtering apparatus, and the thickness T of the free layer34of the MR element30(TMR element) was measured under a cross-sectional transmission electron microscope (cross-sectional TEM). In the MR element30(TMR element) of the practical example, the distance from the first edge Ed1to the second edge Ed2in a cross section parallel to the YZ plane was 1.3 μm. In the practical example, the thickness T1at the first edge Ed1was 9.0 nm. The inclined surface angle ϕ1at the position on the opposed surface60aclosest to the first edge Ed1was 39.1°. In the practical example, the thickness T2at the second edge Ed2was 10.9 nm.
The inclined surface angle ϕ2at the position on the opposed surface60aclosest to the second edge Ed2was 25.2°. In actually fabricating the MR element30, the first surface34aof the free layer34can have so high a surface roughness that effects on various parameters are not negligible. In such a case, to reduce measurement errors, inclination angles θ including the inclination angles θ1, θ2, and θp may be measured in the following manner. Initially, determine average lines (straight lines) of the cross-sectional curve of the first surface34anear the respective measurement points of the inclination angles θ. Then, measure the angles that the average lines form with the bottom surface60bof the support member60as the inclination angles θ at the measurement points by assuming the average lines as the tangents to the first surface34aat the measurement points. The average lines desirably have such a length that the average lines intersect the cross-sectional curve a plurality of times. For example, in the case of the MR element30(TMR element) according to the practical example, the average lines may have a length in the range of 10 to 100 nm. Such an angle measurement method may be employed as the specific definition of the inclination angles θ in the present example embodiment. To reduce measurement errors, the thicknesses T at the measurement points may be measured by assuming the directions perpendicular to the foregoing average lines as the directions perpendicular to the first surface34a. Alternatively, if the opposed surface60aincluding the curved portion60a2has a lower surface roughness than that of the first surface34a, the thicknesses T at the measurement points may be measured by assuming the directions perpendicular to the opposed surface60aat the positions on the opposed surface60aclosest to the measurement points as the directions perpendicular to the first surface34a. Either one of the foregoing methods for measuring the thickness T may be employed as the specific definition of the thickness T in the present example embodiment. Next, operation and effects of the magnetic sensor1according to the present example embodiment will be described. In the present example embodiment, in a given cross section S, the thickness T1at the first edge Ed1is smaller than the thickness Tp at the predetermined point P. Moreover, in the present example embodiment, the thickness T2at the second edge Ed2is greater than the thickness Tp in the given cross section S. According to the present example embodiment, the concentration of magnetic charges at and near the first edge Ed1of the free layer34can thus be reduced. In the present example embodiment, in a given cross section S, the inclination angle θ1at the first edge Ed1is greater than the inclination angle θp at the predetermined point P. Moreover, in the given cross section S, the inclination angle θ2at the second edge Ed2is smaller than the inclination angle θp at the predetermined point P. The inclination angle θ is substantially the same as the inclination angle of the opposed surface60a, and can be controlled by changing the position of the MR element30and/or the inclination angle itself of the opposed surface60a. As described above, the thickness T decreases as the inclination angle of the opposed surface60aincreases. 
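The measured thickness trend of the practical example is close to what a simple line-of-sight deposition model predicts: for a directional (non-conformal) flux arriving perpendicular to the reference plane, the local deposited thickness scales with the cosine of the local surface tilt. The Python sketch below applies that assumed model; the 12 nm flat-surface thickness is a fitted assumption, not a value from the text.

import math

def local_thickness_nm(t_flat_nm: float, tilt_deg: float) -> float:
    """Deposited thickness on a surface tilted by tilt_deg from the
    reference plane, assuming a purely directional vertical flux."""
    return t_flat_nm * math.cos(math.radians(tilt_deg))

t_flat = 12.0  # assumed thickness on a flat region, nm
print(local_thickness_nm(t_flat, 39.1))  # near the first edge Ed1: ~9.3 nm
print(local_thickness_nm(t_flat, 25.2))  # near the second edge Ed2: ~10.9 nm

These values sit close to the 9.0 nm and 10.9 nm reported for the practical example, supporting the stated link between the inclined surface angle and the thickness.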
Such a relationship between the thickness T and the inclination angle of the opposed surface60acan be achieved by forming the MR element30using a so-called non-conformal film formation apparatus such as a magnetron sputtering apparatus. The inclination angles θ can be controlled by the inclination angle of the opposed surface60aand the arrangement of the MR element30. According to the present example embodiment, the thickness T can be controlled by controlling the inclination angles θ as described above. The effect of reducing the concentration of magnetic charges will be described in detail below by comparison with an MR element230according to a comparative example. The MR element230of the comparative example will initially be described with reference toFIG.8.FIG.8is an explanatory diagram for describing magnetic charges on the MR element230of the comparative example.FIG.8shows a cross section corresponding to the cross section S. Like the MR element30according to the present example embodiment, the MR element230according to the comparative example includes a magnetization pinned layer232, a spacer layer233, a free layer234, and a not-shown underlayer and cap layer. The MR element230of the comparative example is located on a flat surface parallel to the reference plane (bottom surface60bof the support member60). Like the MR element30according to the present example embodiment, the MR element230is patterned to a shape that is long in the X direction. This gives the free layer234magnetic shape anisotropy where the direction of the easy axis of magnetization is parallel to the X direction. The free layer234includes a first surface234alocated at an end in the Z direction, a second surface234bopposite to the first surface234a, and an outer peripheral surface connecting the first surface234aand the second surface234b. Both the first and second surfaces234aand234bare flat surfaces parallel to the reference plane. The first and second surfaces234aand234beach have a shape that is long in the X direction. The first surface234ahas a first edge Ed11and a second edge Ed12located at both ends in the lateral direction of the first surface234a, i.e., a direction parallel to the Y direction. In particular, in the comparative example, the first edge Ed11is an edge located at the end of the first surface234ain the −Y direction. The second edge Ed12is an edge located at the end of the first surface234ain the Y direction. If an external magnetic field is applied to the MR element230, the direction of the magnetic moment inside the free layer234rotates depending on the direction and strength of the external magnetic field. As a result, the direction of the magnetization of the free layer234rotates. Here, magnetic charges occur on the outer peripheral surface of the free layer234. Now, suppose that an external magnetic field in the Y direction is applied to the MR element230. If the external magnetic field in the Y direction is applied, positive magnetic charges concentrate at a portion of the outer peripheral surface of the free layer234near the second edge Ed12, and negative magnetic charges concentrate at a portion of the outer peripheral surface of the free layer234near the first edge Ed11. InFIG.8, the symbols “+” represent positive magnetic charges, and the symbols “−” negative magnetic charges. A demagnetizing field in the −Y direction occurs in the free layer234due to such magnetic charges. The strength of the demagnetizing field is higher as it is closer to the magnetic charges. 
The strength of the demagnetizing field in the portions of the free layer234near the first and second edges Ed11and Ed12is therefore high. The strength of the demagnetizing field in the midsection of the free layer234is low. If no external magnetic field is applied, the direction of the magnetization of the free layer234and the direction of the magnetic moment in the free layer234are parallel to the X direction. If the strength of the external magnetic field is low, the direction of the magnetic moment in the midsection of the free layer234starts to rotate toward the Y direction. On the other hand, the direction of the magnetic moment in the portions of the free layer234near the first and second edges Ed11and Ed12does not rotate or hardly rotates. If the strength of the external magnetic field increases to a certain extent, the direction of the magnetic moment in the midsection of the free layer234becomes the same or substantially the same as the Y direction. Meanwhile, the direction of the magnetic moment in the portions of the free layer234near the first and second edges Ed11and Ed12starts to rotate toward the Y direction. If the strength of the external magnetic field becomes even higher, the direction of the magnetic moment in the portions of the free layer234near the first and second edges Ed11and Ed12also becomes the same or substantially the same as the Y direction. As described above, in the MR element230of the comparative example, the direction of the magnetic moment in the entire free layer234does not change uniformly because of the demagnetizing field. As a result, the magnetization of the free layer234changes nonlinearly with respect to a change in the strength of the external magnetic field. Consequently, a detection signal generated by a magnetic sensor including the MR element230of the comparative example changes nonlinearly with respect to a change in the strength of the external magnetic field. Next, magnetic charges on the MR element30according to the present example embodiment will be described.FIG.9is an explanatory diagram for describing magnetic charges on the MR element30.FIG.9shows a cross section corresponding to the cross section S. InFIG.9, the symbols “+” represent positive magnetic charges, and the symbols “−” negative magnetic charges. In the MR element30according to the present example embodiment, the thickness T1at the first edge Ed1is smaller than the thickness T2at the second edge Ed2. Now, suppose that an external magnetic field in the Y direction is applied to the MR element30. In such a case, positive magnetic charges concentrate at a portion of the outer peripheral surface of the free layer34near the second edge Ed2as in the comparative example. By contrast, negative magnetic charges do not concentrate at a portion of the outer peripheral surface of the free layer34near the first edge Ed1but are distributed over the first surface34aas well. This reduces a difference between the strength of the demagnetizing field at the portion of the free layer34near the first edge Ed1and that of the demagnetizing field in the midsection of the free layer34. As the difference decreases, the direction of the magnetic moment at the portion of the free layer34near the first edge Ed1rotates more similarly to that of the magnetic moment in the midsection of the free layer34.
According to the present example embodiment, the magnetization of the free layer34can thus be prevented from changing nonlinearly with respect to a change in the strength of the external magnetic field. As a result, according to the present example embodiment, the range where the detection signal generated by the magnetic sensor1changes linearly can be expanded. Next, a result of an experiment for examining the linearity of the detection signal will be described. For the experiment, a magnetic sensor of the practical example and a magnetic sensor of the comparative example were fabricated. The magnetic sensor of the practical example and the magnetic sensor of the comparative example each have basically the same configuration as that of the magnetic sensor1according to the present example embodiment. The magnetic sensor of the practical example includes MR elements30(TMR elements) according to the foregoing practical example as the MR elements30. The magnetic sensor of the comparative example includes MR elements230according to the comparative example instead of the MR elements30. The MR elements230according to the comparative example are TMR elements formed on a flat surface parallel to the reference plane (bottom surface60bof the support member60) by the same method as with the MR elements30according to the practical example. In the experiment, changes in a detection signal (signal corresponding to the detection signal S1or S2) generated by each of the magnetic sensors of the practical example and the comparative example were examined while changing the strength of the external magnetic field in the Y direction applied to the magnetic sensors. FIG.10shows the results of the experiment. Here, the strength of the external magnetic field applied to the magnetic sensors is expressed by H, and the strength of the magnetic anisotropy fields in the free layers34and234is expressed by Hk. The horizontal axis ofFIG.10indicates H/Hk. The vertical axis ofFIG.10indicates normalized signals obtained by normalizing the detection signals to a maximum value of 1. InFIG.10, the curve denoted by the reference numeral81represents the normalized signal of the magnetic sensor according to the practical example. The curve denoted by the reference numeral82represents the normalized signal of the magnetic sensor according to the comparative example. As shown inFIG.10, the normalized signal of the magnetic sensor (reference numeral82) according to the comparative example changes linearly within the range where H/Hk is 0 to 0.7. The normalized signal of the magnetic sensor (reference numeral81) according to the practical example changes linearly within the range where H/Hk is 0 to 0.8. As can be seen fromFIG.10, according to the present example embodiment, the range where the detection signals generated by the magnetic sensor1change linearly can be expanded. As shown inFIG.8, the end faces of the MR element230in the −Y direction and the Y direction of the comparative example are each tilted relative to the XY plane. To reduce the concentration of magnetic charges at the portions of the outer peripheral surface of the free layer234near the first and second edges Ed11and Ed12in the MR element230of the comparative example, the foregoing end faces can be tilted more steeply. However, the effect of increasing the tilt of the end faces as described above is limited since the MR element typically has a small thickness.
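The linear ranges quoted above can be extracted from measured curves programmatically. The Python sketch below uses one reasonable criterion (deviation from the initial slope below a fixed tolerance); both the criterion and the toy data are assumptions standing in for the actual measurements.

import numpy as np

def linear_range(h_over_hk: np.ndarray, signal: np.ndarray, tol: float = 0.01) -> float:
    """Largest H/Hk up to which the signal follows its initial slope within tol."""
    slope = np.polyfit(h_over_hk[:5], signal[:5], 1)[0]  # slope near the origin
    inside = np.abs(signal - slope * h_over_hk) <= tol
    first_out = np.argmax(~inside) if not inside.all() else len(inside)
    return float(h_over_hk[first_out - 1])

# Toy normalized signal with a kink at H/Hk = 0.8 (stand-in for curve 81).
h = np.linspace(0.0, 1.0, 201)
toy_signal = np.clip(h, 0.0, 0.8) + 0.2 * np.maximum(h - 0.8, 0.0)
print(linear_range(h, toy_signal))  # approximately 0.8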
Moreover, the MR element230of the comparative example causes the following problems if the tilt of the end faces is increased. That is, increasing the tilt of the end faces increases regions not covered with the cap layer when seen in the Z direction, and the MR element230can become more prone to corrosion and oxidation. The free layer234is sometimes made of a layered film including a plurality of layers. In such a case, increasing the tilt of the end faces as described above reduces the areas of the layers of the layered film closer to the cap layer, and can change the properties of the free layer234at the edges. Moreover, in forming a plurality of MR elements230, if the tilt of the end faces is increased with the shape of the resist mask unchanged, the width of the tilted part of each of the MR elements230is increased. As a result, the distance between the two adjoining MR elements230decreases. The distance between the two adjoining MR elements230needs to be increased to reduce a risk of the two adjoining MR elements230being electrically connected. However, an increase in the distance between the two adjoining MR elements230lowers the integration density of the plurality of MR elements230and results in a decrease in the S/N ratio. By contrast, in the present example embodiment, the concentration of magnetic charges at the portion of the outer peripheral surface of the free layer34near the first edge Ed1can be reduced without increasing the tilt of the end faces of the MR element30in the −Y direction and the Y direction as described above. In other words, according to the present example embodiment, the concentration of magnetic charges can be reduced while preventing the occurrence of the problems due to the increased tilt of the end faces of the MR element30. Moreover, in the present example embodiment, the concentration of magnetic charges can easily be reduced by forming the MR element30so that at least a part of the MR element30is located on the curved portion60a2of the opposed surface60a. The present example embodiment has dealt with the case where the MR element30is located on the curved portion60a2. However, the MR element30may be located on the following inclined portion. The inclined portion includes a plurality of flat surfaces. Of the plurality of flat surfaces, the one closest to the bottom surface60bof the support member60will be referred to as a first flat surface. The flat surface farthest from the bottom surface60bof the support member60will be referred to as a second flat surface. The MR element30is located across the first flat surface and the second flat surface. An angle that the first flat surface forms with the bottom surface60bof the support member60is greater than angles that the respective flat surfaces other than the first flat surface form with the bottom surface60bof the support member60. The angle that the second flat surface forms with the bottom surface60bof the support member60is smaller than the angles that the respective flat surfaces other than the second flat surface form with the bottom surface60bof the support member60. The present example embodiment has dealt with the case where the entire MR element30is located on the first inclined surface SL1or the second inclined surface SL2of the curved portion60a2. However, as will be described in a second example embodiment, the MR element30may be located across the first inclined surface SL1and the second inclined surface SL2.
The present example embodiment has also dealt with the case where both the first and second edges Ed1 and Ed2 are located above the first inclined surface SL1, or both are located above the second inclined surface SL2. However, if either one of the first and second edges Ed1 and Ed2 is located above the first inclined surface SL1 or the second inclined surface SL2, the other may be located above the flat portion 60a1 or above the border between the first and second inclined surfaces SL1 and SL2.

Modification Examples

Next, modification examples of the present example embodiment will be described. Initially, a first modification example of the MR element 30 will be described with reference to FIG. 11. In the first modification example, the MR element 30 is an anisotropic magnetoresistive (AMR) element. In the first modification example, the MR element 30 includes a magnetic layer 36 given magnetic anisotropy, instead of the magnetization pinned layer 32, the spacer layer 33, and the free layer 34 shown in FIG. 6. The magnetic layer 36 has a magnetization whose direction is variable depending on the direction of the external magnetic field. As described above, the MR element 30 is patterned to a shape that is long in the X direction. This gives the magnetic layer 36 magnetic shape anisotropy where the direction of the easy axis of magnetization is parallel to the X direction. The magnetic layer 36 has a first surface 36a having a shape that is long in the X direction, a second surface 36b opposite to the first surface 36a, and an outer peripheral surface connecting the first surface 36a and the second surface 36b. The description of the shape of the MR element 30 given with reference to FIGS. 6 and 7 also applies to the first modification example, with the free layer 34, the first surface 34a, and the second surface 34b in that description replaced with the magnetic layer 36, the first surface 36a, and the second surface 36b.

Next, a second modification example of the MR element 30 will be described with reference to FIG. 12. In the second modification example, the MR element 30 has an oval planar shape. The MR element 30 includes a constant width portion 30B, a first width changing portion 30A, and a second width changing portion 30C. The first width changing portion 30A is located in front of the constant width portion 30B in the −X direction. The second width changing portion 30C is located in front of the constant width portion 30B in the X direction. In FIG. 12, the border between the constant width portion 30B and the first width changing portion 30A and the border between the constant width portion 30B and the second width changing portion 30C are shown by dotted lines. The constant width portion 30B has a constant width (dimension in the direction parallel to the Y direction) regardless of the position in the X direction. The width of the first width changing portion 30A decreases with increasing distance from the constant width portion 30B. The width of the second width changing portion 30C likewise decreases with increasing distance from the constant width portion 30B. The first and second width changing portions 30A and 30C are provided to control the magnetic domain structure of the free layer 34, for example. In the first and second width changing portions 30A and 30C, the difference between the thickness T2 at the second edge Ed2 and the thickness T1 at the first edge Ed1 decreases with increasing distance from the constant width portion 30B.
This lowers the effect of reducing the concentration of magnetic charges at the portion of the MR element 30 near the end in the −X direction and the portion of the MR element 30 near the end in the X direction. However, the difference between the thicknesses T2 and T1 in the other portions is sufficiently large, whereby the effect of reducing the concentration of magnetic charges can still be obtained.

Next, a third modification example of the MR element 30 will be described with reference to FIGS. 13 to 15. The MR element 30 shown in FIGS. 13 to 15 is a current-in-plane (CIP) MR element. FIG. 13 is an explanatory diagram for describing the third modification example of the MR element 30. FIG. 14 is a cross-sectional view showing a cross section at the position indicated by the line 14-14 of FIG. 13. FIG. 15 is a cross-sectional view showing a cross section at the position indicated by the line 15-15 of FIG. 13. For the sake of convenience, FIGS. 14 and 15 show only the MR element 30 and the support member 60.

The MR element 30 includes a layered film including the underlayer 31, the magnetization pinned layer 32, the spacer layer 33, the free layer 34, and the cap layer 35 (see FIG. 6). This layered film will be denoted by the reference numeral 30M. In the third modification example, the dimension of the layered film 30M in a direction parallel to the X direction is greater than that of the curved portion 60a2 of the opposed surface 60a of the support member 60 in the direction parallel to the X direction. A part of the layered film 30M is located on the curved portion 60a2. Another part of the layered film 30M is located on the flat portion 60a1 of the opposed surface 60a in front of the curved portion 60a2 in the −X direction. Yet another part of the layered film 30M is located on the flat portion 60a1 of the opposed surface 60a in front of the curved portion 60a2 in the X direction. The portion of the layered film 30M located on the curved portion 60a2 will hereinafter be referred to as a curved surface-located portion 30M1. The portions of the layered film 30M located on the flat portion 60a1 will be referred to as flat surface-located portions 30M2.

In the third modification example, the MR element 30 further includes a nonmagnetic metal film 30N. As shown in FIGS. 13 and 15, the nonmagnetic metal film 30N covers the flat surface-located portions 30M2 but does not cover most of the curved surface-located portion 30M1. The flat surface-located portions 30M2 are substantially the same as the MR element 230 of the comparative example shown in FIG. 8. These portions therefore do not provide the effect of reducing the concentration of magnetic charges. Meanwhile, the curved surface-located portion 30M1 provides the effect of reducing the concentration of magnetic charges. In the third modification example, since the flat surface-located portions 30M2 are covered with the nonmagnetic metal film 30N, only a signal corresponding to the resistance of the curved surface-located portion 30M1 can be detected from the MR element 30. In other words, in the third modification example, only the curved surface-located portion 30M1 can substantially function as the MR element 30. The effect of reducing the concentration of magnetic charges can thus be obtained. In the third modification example, if the flat surface-located portions 30M2 are sufficiently small compared to the curved surface-located portion 30M1, the nonmagnetic metal film 30N may be omitted.
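The role of the nonmagnetic metal film 30N can be illustrated with a simple resistor model of the CIP current path: the film electrically shunts the flat surface-located portions 30M2, so the field-dependent resistance change that is detected comes almost entirely from the curved surface-located portion 30M1. This is only a sketch of the principle; all resistance values below are hypothetical.

```python
def series(*rs):
    return sum(rs)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

r_flat = 100.0    # resistance of one flat surface-located portion 30M2 (ohms, assumed)
r_curved = 100.0  # resistance of the curved surface-located portion 30M1 (assumed)
r_film = 1.0      # low resistance of the metal film 30N over each flat portion (assumed)
dr_mr = 2.0       # field-dependent resistance change of any portion (assumed)

# Each flat portion is shunted by the film, so its contribution (and its
# field-dependent change) to the series current path is strongly suppressed.
r_total = series(parallel(r_flat, r_film), r_curved, parallel(r_flat, r_film))
r_total_field = series(parallel(r_flat + dr_mr, r_film),
                       r_curved + dr_mr,
                       parallel(r_flat + dr_mr, r_film))
print(r_total_field - r_total)  # close to dr_mr: dominated by the curved portion
```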
Second Example Embodiment

A second example embodiment of the invention will now be described. Initially, a configuration of a magnetic sensor according to the present example embodiment will be described with reference to FIGS. 16 and 17. FIG. 16 is a schematic diagram showing a part of the magnetic sensor according to the present example embodiment. FIG. 17 is a cross-sectional view showing a part of the magnetic sensor according to the present example embodiment.

A magnetic sensor 101 according to the present example embodiment has the same configuration as that of the magnetic sensor 1 according to the first example embodiment except for the MR elements. The magnetic sensor 101 according to the present example embodiment includes MR elements 130 instead of the MR elements 30 according to the first example embodiment. FIG. 17 shows a cross section parallel to the YZ plane and intersecting an MR element 130. The MR element 130 is located on the curved portion 60a2 of the opposed surface 60a of the support member 60. In particular, in the present example embodiment, the MR element 130 is located across the first inclined surface SL1 and the second inclined surface SL2. The MR element 130 has a shape that is long in the X direction and has a rectangular planar shape. The MR element 130 may be a spin-valve MR element or an AMR element. The following description will be given using the case where the MR element 130 is a spin-valve MR element as an example. Like the MR element 30 shown in FIG. 6 according to the first example embodiment, the MR element 130 includes an underlayer 31, a magnetization pinned layer 32, a spacer layer 33, a free layer 34, and a cap layer 35. For the sake of convenience, in the present example embodiment, the direction of the magnetization of the magnetization pinned layer 32 will be referred to as the Y direction or the −Y direction. The free layer 34 has magnetic shape anisotropy where the direction of the easy axis of magnetization is parallel to the X direction.

Next, the MR element 130 will be described in more detail with reference to FIG. 18. FIG. 18 is an explanatory diagram for describing the shape of the free layer 34 and is an enlarged view of a part of the cross section shown in FIG. 17. In FIG. 18, the underlayer 31 and the cap layer 35 of the MR element 130 are omitted.

As described in the first example embodiment, the free layer 34 has a first surface 34a, a second surface 34b, and an outer peripheral surface. The first surface 34a has a first edge Ed1 and a second edge Ed2 located at both lateral ends of the first surface 34a. In the present example embodiment, the first edge Ed1 is located on the first inclined surface SL1 of the curved portion 60a2, and the second edge Ed2 is located on the second inclined surface SL2 of the curved portion 60a2. The distance from the bottom surface 60b of the support member 60 to the first edge Ed1 and the distance from the bottom surface 60b of the support member 60 to the second edge Ed2 may be the same as or different from each other. In a given cross section S intersecting the free layer 34 and perpendicular to the longitudinal direction of the first surface 34a (the direction parallel to the X direction), both the inclination angle θ1 at the first edge Ed1 and the inclination angle θ2 at the second edge Ed2 are greater than the inclination angle θp at a predetermined point P. In the present example embodiment, the predetermined point P refers to a point on the first surface 34a where the inclination angle θ is the smallest. In particular, in the present example embodiment, the inclination angle θ at the predetermined point P is 0.
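Purely as a compact restatement in the notation of the text (no new constraints are introduced; the monotonic behavior is elaborated in the following paragraph), the conditions on the free layer 34 in a given cross section S can be written as:

```latex
% Conditions on the free layer 34 in a given cross section S
\begin{aligned}
&\theta_1 > \theta_p, \qquad \theta_2 > \theta_p, \qquad \theta_p = 0,\\
&T_1 < T_p, \qquad T_2 < T_p,\\
&\theta \ \text{increases and} \ T \ \text{decreases monotonically from } P
  \ \text{toward each of } \mathrm{Ed}_1 \ \text{and} \ \mathrm{Ed}_2.
\end{aligned}
```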
In the given cross section S, the inclination angle θ increases from the predetermined point P toward the first edge Ed1 and increases from the predetermined point P toward the second edge Ed2. In the given cross section S, both the thickness T1 at the first edge Ed1 and the thickness T2 at the second edge Ed2 are smaller than the thickness Tp at the predetermined point P. In the given cross section S, the thickness T decreases from the predetermined point P toward the first edge Ed1 and decreases from the predetermined point P toward the second edge Ed2.

As in the first example embodiment, the angle that the opposed surface 60a forms with the reference plane (the bottom surface 60b of the support member 60) in a given cross section S will be denoted by the symbol ϕ. In the present example embodiment, the angle ϕ at the position on the opposed surface 60a closest to the second edge Ed2 is greater than the angle ϕ at the position on the opposed surface 60a closest to the predetermined point P. The angle ϕ increases from the position on the opposed surface 60a closest to the predetermined point P toward the position on the opposed surface 60a closest to the first edge Ed1, and increases from the position on the opposed surface 60a closest to the predetermined point P toward the position on the opposed surface 60a closest to the second edge Ed2.

In the present example embodiment, the thickness T2 at the second edge Ed2 is smaller than that in the first example embodiment, where the second edge Ed2 is located near the top of the curved portion 60a2. According to the present example embodiment, the concentration of magnetic charges at the portion of the outer peripheral surface of the free layer 34 near the second edge Ed2 can thereby be reduced. According to the present example embodiment, the magnetization of the free layer 34 can thus be more effectively prevented from changing nonlinearly with respect to a change in the strength of the external magnetic field. As a result, according to the present example embodiment, the range where the detection signals generated by the magnetic sensor 101 change linearly can be expanded. The configuration, operation and effects of the present example embodiment are otherwise the same as those of the first example embodiment.

Third Example Embodiment

A third example embodiment of the invention will now be described. Initially, a configuration of a magnetic sensor according to the present example embodiment will be described with reference to FIG. 19. FIG. 19 is a cross-sectional view showing a part of the magnetic sensor according to the present example embodiment.

The configuration of the magnetic sensor 301 according to the present example embodiment differs from that of the magnetic sensor 1 according to the first example embodiment in the following respects. The magnetic sensor 301 according to the present example embodiment includes MR elements 330 instead of the MR elements 30 according to the first example embodiment. FIG. 19 shows a cross section parallel to the YZ plane and intersecting an MR element 330. The opposed surface 60a of the support member 60 includes at least one curved portion 60a3 not parallel to the bottom surface 60b of the support member 60, instead of the curved portion 60a2 according to the first example embodiment. As shown in FIG. 19, the curved portion 60a3 is a concave surface recessed toward the bottom surface 60b. The curved portion 60a3 has a curved shape (an arch shape) curved so as to be recessed toward the bottom surface 60b (in the −Z direction) in a given cross section parallel to the YZ plane.
In the given cross section parallel to the YZ plane, the distance from the bottom surface 60b to the curved portion 60a3 is the smallest at the center of the curved portion 60a3 in a direction parallel to the Y direction (hereinafter referred to simply as the center of the curved portion 60a3). The at least one curved portion 60a3 extends along the X direction. The overall shape of the at least one curved portion 60a3 is a semicylindrical surface formed by moving the curved shape shown in FIG. 19 along the X direction. The insulating layer 62 of the support member 60 has a cross-sectional shape such that the curved portion 60a3 is formed in the opposed surface 60a. Specifically, the insulating layer 62 has a cross-sectional shape recessed in the −Z direction in a given cross section parallel to the YZ plane.

A portion of the curved portion 60a3 from an edge at the end of the curved portion 60a3 in the Y direction to the center of the curved portion 60a3 will be referred to as a first inclined surface and be denoted by the reference symbol SL11. A portion of the curved portion 60a3 from an edge at the end of the curved portion 60a3 in the −Y direction to the center of the curved portion 60a3 will be referred to as a second inclined surface and be denoted by the reference symbol SL12. Both the first and second inclined surfaces SL11 and SL12 are inclined relative to the reference plane, i.e., the bottom surface 60b.

In the present example embodiment, the entire MR element 330 is located on the first inclined surface SL11 or the second inclined surface SL12. FIG. 19 shows the case where the MR element 330 is located on the first inclined surface SL11. The MR element 330 has a shape that is long in the X direction and has a rectangular planar shape. The MR element 330 may be a spin-valve MR element or an AMR element. The following description will be given using the case where the MR element 330 is a spin-valve MR element as an example. Like the MR element 30 shown in FIG. 6 according to the first example embodiment, the MR element 330 includes an underlayer 31, a magnetization pinned layer 32, a spacer layer 33, a free layer 34, and a cap layer 35. The free layer 34 has magnetic shape anisotropy where the direction of the easy axis of magnetization is parallel to the X direction.

Next, the MR element 330 will be described in more detail with reference to FIG. 20. FIG. 20 is an explanatory diagram for describing the shape of the free layer 34 and is an enlarged view of a part of the cross section shown in FIG. 19. In FIG. 20, the underlayer 31 and the cap layer 35 of the MR element 330 are omitted.

As described in the first example embodiment, the free layer 34 has a first surface 34a, a second surface 34b, and an outer peripheral surface. The first surface 34a has a first edge Ed1 and a second edge Ed2 located at both lateral ends of the first surface 34a. In the present example embodiment, both the first and second edges Ed1 and Ed2 are located above the first inclined surface SL11 of the curved portion 60a3, or both are located above the second inclined surface SL12 of the curved portion 60a3. The distance from the bottom surface 60b of the support member 60 to the first edge Ed1 is greater than the distance from the bottom surface 60b of the support member 60 to the second edge Ed2.
The relationship between the inclination angle θ1 at the first edge Ed1, the inclination angle θ2 at the second edge Ed2, and the inclination angle θp at the predetermined point P in a given cross section S intersecting the free layer 34 and perpendicular to the longitudinal direction of the first surface 34a (the direction parallel to the X direction) is the same as that in the first example embodiment. The relationship between the thickness T1 at the first edge Ed1, the thickness T2 at the second edge Ed2, and the thickness Tp at the predetermined point P in the given cross section S is also the same as that in the first example embodiment. For the sake of convenience, an imaginary surface is assumed by extending the second surface 34b along the curved portion 60a3, and the thickness T1 is defined as the distance between the first surface 34a and the imaginary surface in the direction perpendicular to the first surface 34a. Like the MR element 130 according to the second example embodiment, the MR element 330 may be located across the first inclined surface SL11 and the second inclined surface SL12. The configuration, operation and effects of the present example embodiment are otherwise the same as those of the first or second example embodiment.

The technology is not limited to the foregoing example embodiments, and various modifications may be made thereto. For example, the number and arrangement of MR elements and the number and arrangement of curved portions are not limited to those described in the example embodiments, and may be freely chosen as long as the requirements set forth in the claims are satisfied. The MR elements according to the technology may be located on a flat surface parallel to the reference plane as long as the requirement that the thickness T1 at the first edge Ed1 be smaller than the thickness Tp at a predetermined point P in a given cross section S is satisfied. An MR element including a free layer 34 having such a thickness T can be implemented, for example, by so-called wedge deposition, which is capable of forming an inclined film thickness.

Obviously, various modification examples and variations of the technology are possible in the light of the above teachings. Thus, it is to be understood that, within the scope of the appended claims and equivalents thereof, the technology may be practiced in embodiments other than the foregoing example embodiments.
DETAILED DESCRIPTION

FIG. 1 shows a schematic representation of an embodiment of a magnetic resonance tomography unit 1 with an embodiment of a transmission interference suppression facility 70.

The magnetic unit 10 has a field magnet 11 that generates a static magnetic field B0 for an orientation of nuclear spins of samples or of the patient 100 in a recording region. The recording region is characterized by an extremely homogeneous static magnetic field B0, where the homogeneity relates, for example, to the magnetic field strength or its magnitude. The recording region is almost spherical and arranged in a patient tunnel 16 that extends in a longitudinal direction 2 through the magnetic unit 10. A patient couch 30 may be moved in the patient tunnel 16 by the motion unit 36. Conventionally, the field magnet 11 is a superconducting magnet that may provide magnetic fields with a magnetic flux density of up to 3 T or above. For lower field strengths, however, permanent magnets or electromagnets with normal-conducting coils may also be used.

Further, the magnetic unit 10 has gradient coils 12 that, for spatial differentiation of the detected mapping regions in the examination volume, are configured to overlay the magnetic field B0 with variable magnetic fields in three spatial directions. The gradient coils 12 are conventionally coils made of normal-conducting wires that may generate mutually orthogonal fields in the examination volume.

The magnetic unit 10 also has a body coil 14 that is configured to irradiate a radio-frequency signal supplied via a signal line into the examination volume, and to receive resonance signals emitted by the patient 100 and output the resonance signals via a signal line. Hereinafter, the term "transmitting antenna" designates an antenna via which the radio-frequency signal is emitted for excitation of the nuclear spins. This may be the body coil 14, but also a local coil 50 with a transmission function.

A control unit 20 supplies the magnetic unit 10 with the different signals for the gradient coils 12 and the body coil 14 and evaluates the received signals. The control unit 20 thus has a gradient controller 21 that is configured to supply the gradient coils 12 via supply lines with variable currents, which, coordinated in terms of time, provide the desired gradient fields in the examination volume. Further, the control unit 20 has a radio-frequency unit 22 that is configured to generate a radio-frequency pulse with a predefined course over time, amplitude, and spectral power distribution for excitation of a magnetic resonance of the nuclear spins in the patient 100. Pulse powers in the range of kilowatts may be achieved in the process. The excitation signals may be emitted into the patient 100 via the body coil 14 or else via a local transmitting antenna. A controller 23 communicates via a signal bus 25 with the gradient controller 21 and the radio-frequency unit 22.

Arranged on the patient 100 as a first receiving coil is a local coil 50, which is connected by a connection line 33 to a receiver of the radio-frequency unit 22. In one embodiment, the body coil 14 may serve as a first receiving antenna within the present embodiments. The magnetic resonance tomography unit 1 has an embodiment of a transmission interference suppression facility 70.

The transmission interference suppression facility 70 has a sensor 71 or, for example, a plurality of sensors 71 that are configured to detect a radio-frequency signal at the Larmor frequency of the magnetic resonance tomography unit (e.g., scattered radiation of an excitation signal of the magnetic resonance tomography unit) and to relay the radio-frequency signal as a signal to the transmission interference suppression controller 72. These may be, for example, magnetic or electric antennas or other detectors for radio-frequency electric and/or magnetic alternating fields.
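For reference, the Larmor frequency referred to here is tied to the static field B0 by the standard relation below; the numerical value for protons is general physics knowledge and is not taken from the text:

```latex
% Larmor frequency of proton spins in the static field B_0
f_{\mathrm{Larmor}} = \frac{\gamma}{2\pi}\,B_0,
\qquad \frac{\gamma}{2\pi} \approx 42.58~\mathrm{MHz/T}
\quad\Rightarrow\quad
f_{\mathrm{Larmor}} \approx 127.7~\mathrm{MHz} \ \text{at} \ B_0 = 3~\mathrm{T}.
```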
The sensors 71 enclose the transmitting antenna at least in one plane (e.g., the horizontal plane) or in its entirety in all spatial directions to reduce propagation of emitted noise of the magnetic resonance tomography unit 1 into the surroundings.

In one embodiment, an embodiment of partial shielding 80 is arranged in other spatial directions in which no sensors 71 are arranged. In FIG. 1, this is configured as an electrically shielding closure of the patient tunnel 16 at the opening of the patient tunnel 16 opposing the transmission interference suppression antenna 60 and the sensor 71. A partial cage may also be provided as a partial shielding 80 that at least partially surrounds the magnetic unit 10 with the transmitting antenna and only leaves open, for example, openings that provide the required access to the patient 100.

There is an interdependence between the number of sensors 71 and transmission interference suppression antennas 60 on the one hand and the extent of the partial shielding 80 on the other hand. For spatial directions in which the partial shielding extends with respect to the transmitting antenna, the propagation of signals of the transmitting antenna is reduced, and the requirements for an active interference suppression by sensors 71 and transmission interference suppression antennas 60 are reduced. In these spatial directions, the density of the sensors 71 and transmission interference suppression antennas 60 may then be reduced, or the sensors and antennas may even be completely omitted. For example, with a partial shielding 80 at one opening of the patient tunnel 16 in FIG. 1, active interference suppression is required only in the direction of the opposing opening.

In one embodiment, the sensor(s) 71 are arranged in a far field of the transmitting antenna, in which the electric and magnetic fields of the electromagnetic radio-frequency alternating field are in phase and emitted electromagnetic waves propagate in space. Since the sensor(s) 71 are located in the far field, the field strengths downstream of a sensor 71, as viewed from the transmitting antenna, may in each case also be easily inferred from the measured value of the sensor 71. If the sensor is arranged at a spacing that matches a spacing predetermined for a limit value, observance of this limit value may thus be ensured with the transmission interference suppression facility 70 of the present embodiments.

Alternatively, the sensor(s) 71 may not be arranged in the far field. Instead, in a calibration process using a calibration antenna in the far field or at a measuring point for EMC, a test signal that is emitted by the transmitting antenna and/or the transmission interference suppression antennas 60 may be detected.
From this, in each case, transfer functions and corresponding inverse functions may be determined in order to determine a transmission interference suppression signal, which, when emitted via the transmission interference suppression antenna(s) 60, forms at the measuring point a destructive interference with the excitation pulse of the transmitting antenna and thus reduces the electromagnetic emission.

The transmission interference suppression antenna 60 may be arranged in the proximity of the transmitting antenna in the patient tunnel 16 (e.g., at, or in the case of a plurality of transmission interference suppression antennas, around the opening of the patient tunnel). The transmission interference suppression antenna thus lies on the propagation path of the electromagnetic wave between the transmitting antenna and the sensor 71. The same also applies to a plurality of transmitters.

In one embodiment, the magnetic resonance tomography unit 1 also has a calibration antenna 75. The calibration antenna 75 is configured to detect test pulses emitted by the transmitting antenna or the transmission interference suppression antenna 60. In this case, detecting may be ascertaining an electric and/or magnetic field strength. For example, amplitude and/or phase are detected in the process. The calibration antenna 75 may be, for example, a pickup coil or an electric antenna such as a dipole.

FIG. 5 schematically shows the arrangement of the transmitting antenna, the transmission interference suppression antennas 60, the sensors 71, and the calibration antenna 75 relative to each other. The representation is two-dimensional; the same may, however, also be provided, for example, three-dimensionally. The transmission interference suppression antennas 60 surround the transmitting antenna as a closed casing, and this is surrounded by a closed or partial casing including the sensors 71. In one embodiment, however, the casing including the sensors 71 may surround the transmitting antenna and the transmission interference suppression antennas 60 at a greater spacing (e.g., in the far field).

The transmitting antenna (e.g., the body coil 14) is surrounded by the transmission interference suppression antennas 60, and these are surrounded by the sensors 71. These form a closed ring or any other closed curve around the transmitting antenna. With transmission interference suppression in all three spatial dimensions, the sensors 71 form a closed casing or surface. With transmission interference suppression in particular sectors or spatial directions, the sensors 71 form corresponding partial casings in the spatial directions, so that the projection of the partial casings outward from the transmitting antenna covers these spatial directions. The spacing between respectively adjacent sensors 71 does not exceed a maximum spacing, which is, for example, less than a quarter, an eighth, or a tenth of the wavelength of the excitation signal.
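As a rough worked example of this spacing rule, the sketch below assumes a 3 T system with a proton Larmor frequency of about 128 MHz (an assumption; the text does not fix the field strength) and evaluates the quoted fractions of the wavelength:

```python
C = 299_792_458.0          # speed of light, approx. valid in air (m/s)
f_larmor = 128e6           # assumed proton Larmor frequency at ~3 T (Hz)

wavelength = C / f_larmor  # roughly 2.34 m
for fraction in (4, 8, 10):
    print(f"lambda/{fraction}: {wavelength / fraction:.2f} m")
# lambda/4 ~ 0.59 m, lambda/8 ~ 0.29 m, lambda/10 ~ 0.23 m maximum sensor spacing
```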
The sensors 71 detect field components tangential to the enveloping curve. Based on the electromagnetic field equations, it is thereby possible to replace the source of the fields in the interior of the casing with known virtual sources on the surface of the casing. The fields of the transmitting antenna and of the transmission interference suppression antennas 60 outside of the casing including the sensors 71 may thus be predicted with the sensors 71. This method of the virtual sources on the enclosing surface is also referred to as a Huygens' box. Transfer functions for a known excitation signal between the transmitting antenna or a transmission interference suppression antenna 60 and any desired point outside of the casing comprising the sensors 71 may thus be determined.

The transmission interference suppression antenna(s) 60 may be arranged in the proximity of the transmitting antenna in the patient tunnel 16 (e.g., at, or in the case of a plurality of transmission interference suppression antennas 60, around the opening). A transmission interference suppression antenna 60 thus lies on the propagation path of the electromagnetic wave between the transmitting antenna and the sensor 71. The same also applies to a plurality of transmission interference suppression antennas 60.

The position of the calibration antenna 75 may be variable, so that the first test pulse or the second test pulse may be detected at different locations with the calibration antenna 75. In one embodiment, a plurality of calibration antennas 75 may be provided at different locations around the transmitting antenna and/or the transmission interference suppression antennas 60. The locations at which the test pulse is detected are in each case more remote from the transmitting antenna and/or the transmission interference suppression antenna(s) 60 than the sensors 71. If the sensors 71 form a casing or a partial casing around the transmitting antenna, in that the sensors 71 form the corner points of a polyhedron and the transmitting antenna and/or the transmission interference suppression antennas 60 are located in an interior of the polyhedron, the calibration antenna 75 is thus located outside of the polyhedron when detecting the test pulse.

In one embodiment, the spacing of the calibration antenna 75 from the transmitting antenna or the transmission interference suppression antenna(s) 60 is at least as large as required by the EMC regulations. For example, a spacing of 10 m or less, at which the field is to be below a defined threshold, may be provided. A spacing of less than 8 m or 5 m may also be provided. In one embodiment, the spacing may be at least a multiple of the wavelength of the excitation pulse in air.

The signals of the test pulse detected by the calibration antenna(s) 75 may be used by the transmission interference suppression facility 70 in order to correct the transfer functions for the transmitting antenna and the transmission interference suppression facility, or functions derived therefrom. When ascertaining the transmission interference suppression signal, the measurements of the sensors provide a basis only for the interior of the polyhedron comprising the sensors 71; for the further propagation outside, assumptions are made (e.g., a free-space propagation of the waves or a reflection by the ceiling or floor). The more detailed properties of the surroundings may only be taken into account via the test pulse detected by the calibration antenna 75. For example, if the calibration antenna 75 is arranged at a location that corresponds to a test spacing for an EMC measurement, observance of the EMC threshold value may thus be ensured at this location, even without a sensor or an antenna being arranged there during operation.

FIG. 2 schematically shows one possible embodiment of a transmission interference suppression facility 70 in detail. For a better overview, only one sensor 71 is symbolically represented in FIG. 2, although the transmission interference suppression facility 70 has a plurality of sensors 71, as is indicated, for example, below in FIG. 3 or 5.
The sensor 71 has an antenna that converts the electric and/or magnetic radio-frequency alternating field of the pulse emitted by the transmitting antenna into a current and/or voltage in a conductor. In one embodiment, the sensor 71 detects components of the electromagnetic field tangential to the enclosing casing, as is explained in relation to FIG. 5. For example, the antenna may be an induction loop, or may have two loops perpendicular to each other for detecting two tangential components. The electric signal generated in this way is conventionally amplified by a low noise amplifier (LNA) still in the sensor before the electric signal is relayed via a signal link for further processing in the transmission interference suppression facility 70.

FIG. 2 represents analog signal processing as an exemplary embodiment. Basically, the concept is that an excitation signal propagating into the surroundings as an electromagnetic wave is reduced by destructive interference, and thus the emission of the magnetic resonance tomography unit 1 into the surroundings is kept below a regulative limit value. According to the present embodiments, the sensor 71 serves as a measuring device for the strength of the propagating electromagnetic wave of the transmitting antenna and the transmission interference suppression antennas 60, for example, to ascertain the transfer functions using test pulses. The transmission interference suppression facility 70 is then to obtain information about the excitation signal in a different way (e.g., as illustrated, via a signal line from the radio-frequency unit 22 or the controller 23), from which information a signal for a destructive interference may be generated. This may be, for example, the signal that is supplied in the radio-frequency unit 22 to an output stage for generating the excitation signal, or an attenuated output signal of the output stage. This may also be a digitized form of the excitation signal, or parameters or signals from which the excitation signal is generated and which define it sufficiently for the generation of a differential signal.

In one embodiment, however, the information about the excitation signal may be detected by a current sensor (e.g., a directional coupler at the foot of the transmitting antenna, such as the body coil 14). The directional coupler generates a signal that is proportional to the current that flows into the transmitting antenna, and therewith also to the magnetic alternating field generated by the transmitting antenna. In one embodiment, two directional couplers that in each case detect the current flowing in and a reflected current may be used in order, by calculating the difference, to detect the current through the transmitting antenna more accurately. The signal proportional to the current is relayed to the transmission interference suppression facility 70.

A scaled excitation signal is then subjected (e.g., by the phase shifter 73) to a phase shift and then amplified by the radio-frequency amplifier 74 before the signal is emitted via the transmission interference suppression antenna 60. The transmission interference suppression controller 72 adjusts the parameters in the process (e.g., phase shift and amplification) as a function of the signal of the sensor 71. As already explained, this may take place using the transfer functions.
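In complex-baseband terms, the chain of FIG. 2 reduces to a complex gain (phase shifter 73 plus amplifier 74) applied to the excitation information, and the two-coupler scheme reduces to a phasor subtraction. The following minimal sketch shows both; the numeric values and the fixed gain and phase are placeholders that the controller 72 would adjust in practice:

```python
import numpy as np

def suppression_signal(excitation, gain, phase_rad):
    # complex gain: amplification plus phase shift before the antenna 60
    return gain * np.exp(1j * phase_rad) * excitation

# net antenna current from two directional couplers (hypothetical samples)
i_forward = np.array([1.0 + 0.2j, 0.9 + 0.1j])       # current flowing in
i_reflected = np.array([0.1 + 0.05j, 0.08 + 0.04j])  # reflected current
i_antenna = i_forward - i_reflected  # difference: current through the antenna

tx_suppr = suppression_signal(i_antenna, gain=0.8, phase_rad=np.pi)
print(tx_suppr)
```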
The transmission interference suppression antennas 60 are arranged in the proximity of the transmitting antenna in the patient tunnel 16 (e.g., around the opening). A greater spacing may also be provided, however, to reduce a reaction of the transmission interference suppression antennas 60 to the excitation of the nuclear spins. In one embodiment, however, the transmission interference suppression antennas 60 may be arranged in the interior of the patient tunnel 16. A further embodiment of the transmission interference suppression facility is represented in FIG. 3.

FIG. 3 illustrates a plurality of sensors 71 and also a plurality of transmission interference suppression antennas 60. These are distributed as far as possible over different spatial directions with respect to the transmitting antenna (e.g., the body coil 14), as was already stated in relation to FIG. 5. In order to supply this plurality of transmission interference suppression antennas 60 with different signals, as is necessary for suppression of the emitted interference in different directions, the controllable radio-frequency amplifier 74 has a plurality of independent amplifier channels for amplification of the individual signals.

In the embodiment in FIG. 3, the transmission interference suppression controller 72 has a signal processing resource (e.g., a digital signal processor (DSP) or an FPGA). In this exemplary embodiment, the sensors 71 already digitize the signals and relay the signals to the transmission interference suppression controller 72. As already described, the phase shifts and attenuation/amplification factors may be ascertained, for example, by the transfer functions. The phase shift and amplification/attenuation may then be carried out by corresponding digital computing operations. In one embodiment, the signals picked up with the calibration antenna 75 may be used to correct the transfer functions. In one embodiment, however, these acts may take place in analog signal processing, with the mixing taking place, for example, by way of a crossbar matrix with adjustable couplings and phase shifts at the intersection points. Further, in FIG. 3, the radio-frequency amplifiers 74 are arranged in the immediate proximity of the transmission interference suppression antennas 60 and are configured as current sources with high internal resistance, so that the transmission interference suppression antennas 60, in the case of the same transmission interference suppression signal, generate a magnetic alternating field largely independent of the antenna impedance, even if, for example, the impedance changes with the frequency.

FIG. 4 shows an exemplary flowchart of the method for operation of the transmission interference suppression facility 70 in a magnetic resonance tomography unit 1. In act S50, the transmission interference suppression facility 70 receives information about the excitation signal. In the simplest case, this may be the excitation signal itself or a signal proportional to the excitation signal (e.g., attenuated by 20 dB, 40 dB, 60 dB, or more). With predetermined excitation signals for known sequences (e.g., a sinc pulse), it may be sufficient, however, if scaling factor, center frequency, phase relationship, and/or duration are given as the information. For example, the baseband signal of the excitation signal and the mixing frequency may also be provided.
In act S60, the transmission interference suppression controller determines a transmission interference suppression signal as a function of the information such that, on emitting the transmission interference suppression signal via the transmission interference suppression antenna, a field strength of the excitation signal is reduced at a predetermined location. For example, a calculation based on Maxwell's field equations and a known geometry may be provided, in which the attenuation and phase shift of the excitation signal at the sensor are ascertained from the known excitation signal. Using the information about the excitation signal, a corresponding transmission interference suppression signal with the inverse phase shift and corresponding amplification may then be determined, so that a destructive interference with an attenuation greater than 6 dB, 12 dB, or more is achieved. In act S70, the transmission interference suppression signal is then emitted via the transmission interference suppression antenna 60.

As already illustrated, the arrangement of the sensors 71 on a casing around the transmitting antenna and the detection of the fields by the sensors 71 allows, in line with a Huygens' box, the field source in the interior of the casing to be replaced by a virtual source on the casing, and thus allows changes (e.g., due to the patient) to be co-detected and taken into account.

In one embodiment, however, instead of the calculation, in act S10, a test pulse is emitted with the transmitter via the transmitting antenna, and then, in act S20, a field strength produced by the test pulse is detected by the sensors 71. The sensors 71 may detect, for example, the electric or the magnetic component. In one embodiment, as already explained in relation to FIG. 5, the sensors 71 detect components of the fields that are oriented tangentially to the virtual casing on which the sensors 71 are arranged. In act S30, using the known properties of the test pulse and the properties detected by the sensors 71, a transfer function between the transmitting antenna and a predetermined point in the far field outside of the casing including the sensors 71 is determined by the transmission interference suppression facility 70. In one embodiment, at least a delay (e.g., the phase shift) and the attenuation are determined. As already explained, this may take place using a Huygens' box. In act S60, the transmission interference suppression signal is ascertained as a function of the transfer function. As already stated in relation to the calculation, this may be achieved with the transfer function determined via the test pulse by a corresponding inverse phase shift and amplification, or, more generally, by the inverse transfer function. Determining the transfer function(s) via a test pulse also allows conditions that are not accessible to the calculation to be detected since, for example, the properties of the patient are only partially known.

In one embodiment, a transfer function between one or more transmission interference suppression antenna(s) 60 and a predetermined location in the far field may be determined in the same way. Different variations of the method may then be provided. For example, the transmission interference suppression signal may be determined directly from the transfer functions and the information about the excitation signal. In one embodiment, the transfer function(s) are determined on installation of the magnetic resonance tomography unit 1.
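A minimal numerical sketch of acts S10 to S60 under strong simplifying assumptions: a single observation point, narrowband signals represented as complex phasors, and a transfer function reduced to one attenuation and one phase shift. All values are hypothetical and serve only to illustrate the inverse-phase-shift-and-amplification idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
test_pulse = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))  # known test pulse (act S10)

h_tx = 0.05 * np.exp(-1j * 0.7)   # assumed transfer fn: transmitting antenna -> far-field point
h_sup = 0.04 * np.exp(-1j * 0.2)  # assumed transfer fn: suppression antenna 60 -> same point

# act S20: field detected at the observation point (with a little sensor noise)
received = h_tx * test_pulse + 0.001 * rng.standard_normal(n)

# act S30: least-squares estimate of attenuation and phase shift
h_tx_est = np.vdot(test_pulse, received) / np.vdot(test_pulse, test_pulse)

# act S60: inverse phase shift and amplification so the two fields cancel:
# h_tx * x + h_sup * (g * x) = 0  =>  g = -h_tx / h_sup
g = -h_tx_est / h_sup

residual = h_tx * test_pulse + h_sup * g * test_pulse  # field after suppression
print(20 * np.log10(np.abs(h_tx) / np.mean(np.abs(residual))))  # attenuation in dB
```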
In one embodiment, however, the determining takes place in each case at least before an image capture in order to take into account the change due to the patient.

In the previously described embodiment, the method makes assumptions about the propagation in the surroundings of the magnetic resonance tomography unit 1 outside of the polyhedron including the sensors 71. To be able to take the propagation into account more effectively, the properties of the surroundings may be detected by measurements with a calibration antenna 75. For this, the calibration antenna may be arranged in act S10 at a location at a greater spacing from the transmitting antenna than the sensors 71. In one embodiment, the spacing corresponds to a spacing, predefined by an EMC regulation, for a limit value of the emitted field.

In act S21, a field strength generated by the first test pulse via the transmitting antenna is detected with the calibration antenna 75. In one embodiment, the detection may take place via a receiver of the magnetic resonance tomography unit 1, which is connected via a wired or wireless signal link to the calibration antenna. In one embodiment, however, the data may be acquired with a test receiver, and correction parameters derived therefrom may be stored in the transmission interference suppression controller 72. In one embodiment, a phase delay is also detected in the process in order to subsequently be able to generate a destructive interference during transmission interference suppression. In one embodiment, a second test pulse, emitted via the transmission interference suppression antennas 60, may correspondingly be detected in act S31 with the calibration antenna 75. This may take place in each case for different positions of the calibration antenna 75 and for all transmission interference suppression antennas 60. For example, for one position of the calibration antenna 75, the first test pulse may be emitted via the transmitting antenna and detected via the calibration antenna 75, and the second test pulse may then be emitted successively via all transmission interference suppression antennas 60. In an embodiment, the first test pulse is also detected with the sensors 71 in order to ascertain a transfer function between the transmitting antenna and the sensors 71. In one embodiment, acts S10 to S21 or S31 are repeated with a calibration antenna 75 positioned at a different location. The position of the calibration antenna 75 may be changed, or a different calibration antenna 75 may be used at a different position.

Once all measured values have been acquired, the target values at the sensors 71 are determined. A vector V is sought that describes the actuation of the transmission interference suppression antennas 60 such that this results in an elimination of the fields in the far field. In principle, V is obtained from the following equation (e.g., for the H-field):

H_BC + H_TxAux(·) * V = 0.

With suitable matrix notation, V may be determined by matrix inversion (e.g., a pseudoinverse). Firstly, V is determined for the case where the H-field from the far-field measurement at the movable calibration antennas 75 is used:

H_BC + H_TxAux(far field) * V = 0.

As a rule, "= 0" is not satisfied exactly; what matters is that the value is less than the EMC limit value. The value for V may therefore also be obtained from a minimization of the function. The ascertained vector V is now taken, and the fields that remain in the near field at the Tx sense antennas (the sensors 71) are calculated.
These are saved for later as the target field:

H_BC + H_TxAux(near field) * V = H_target(Tx sense).

During regular operation of the MR unit, only the fixed sensors are then still present, and it must continue to be ensured that the EMC emission conditions are observed. For this, a series of test pulses is again sent to all antennas in the current measuring situation, and a new actuation vector V2 is sought that suppresses the far-field emission or reduces the far-field emission such that the far-field emission remains under the EMC limits. The target field ascertained during the calibration is used for this, and V2 is determined according to the following equation:

H_BC − H_target(Tx sense) + H_TxAux(near field) * V2 = 0.

This information is subsequently used in act S60 of determining the interference suppression signal as a function of the far field transfer function to correct the transmission interference suppression signal.
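In matrix form, the two equations above can be solved via a pseudoinverse, as suggested in the text. The following sketch uses random complex matrices purely as stand-ins for the measured transfer functions; the shapes and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_far, n_near, n_ant = 6, 8, 4   # observation points and suppression antennas 60 (assumed)

H_BC_far = rng.standard_normal(n_far) + 1j * rng.standard_normal(n_far)
H_TxAux_far = rng.standard_normal((n_far, n_ant)) + 1j * rng.standard_normal((n_far, n_ant))

# calibration: H_BC(far) + H_TxAux(far) @ V = 0  =>  V = -pinv(H_TxAux(far)) @ H_BC(far)
V = -np.linalg.pinv(H_TxAux_far) @ H_BC_far

H_BC_near = rng.standard_normal(n_near) + 1j * rng.standard_normal(n_near)
H_TxAux_near = rng.standard_normal((n_near, n_ant)) + 1j * rng.standard_normal((n_near, n_ant))

# target field at the Tx sense antennas, saved during calibration
H_target = H_BC_near + H_TxAux_near @ V

# regular operation: H_BC(near) - H_target + H_TxAux(near) @ V2 = 0
V2 = -np.linalg.pinv(H_TxAux_near) @ (H_BC_near - H_target)
print(np.allclose(V, V2))  # in this synthetic setting the two actuations coincide
```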
FIG. 6 shows a further exemplary flowchart of the method for operation of the transmission interference suppression facility 70 according to the invention in an embodiment of the magnetic resonance tomography unit 1. In act S50, the transmission interference suppression facility 70 receives information about the excitation signal. In the simplest case, this may be the excitation signal itself or a signal proportional to the excitation signal (e.g., attenuated by 20 dB, 40 dB, 60 dB, or more). This may be detected, for example, in act S40 by a pick-up coil from the generated field or by a directional coupler from the signal supplied to the transmitting antenna. With predetermined excitation signals for known sequences (e.g., a sinc pulse), it may also be sufficient, however, if scaling factor, center frequency, phase relationship, and/or duration are given as the information. For example, the baseband signal of the excitation signal and the mixing frequency may also be provided.

In act S60, the transmission interference suppression controller determines a transmission interference suppression signal as a function of the information such that, on emitting the transmission interference suppression signal via the transmission interference suppression antenna, a field strength of the excitation signal is reduced at a predetermined location. For example, a calculation based on Maxwell's field equations and a known geometry may be provided, in which the attenuation and phase shift of the excitation signal at the sensor are ascertained from the known excitation signal. Using the information about the excitation signal, a corresponding transmission interference suppression signal may then be determined with the inverse phase shift and corresponding amplification, so that a destructive interference with an attenuation greater than 6 dB, 12 dB, or more is achieved. In act S70, the transmission interference suppression signal is then emitted via the transmission interference suppression antenna 60.

In one embodiment, however, instead of the calculation, in act S10 of emitting, a test pulse is emitted with the transmitter via the transmitting antenna, and then, in act S20 of detecting, a field strength produced by the test pulse is detected by the sensor. The sensor may detect, for example, the electric or the magnetic component. In act S30, a transfer function between the transmitting antenna and the sensor 71 is determined by the transmission interference suppression facility 70 using the known properties of the test pulse and the properties detected by the sensor 71. For example, an autocorrelation algorithm may be provided for this purpose. In one embodiment, at least a delay (e.g., the phase shift) and the attenuation are ascertained. In act S60, the transmission interference suppression signal is ascertained as a function of the transfer function. As already stated in relation to the calculation, this may be achieved with the transfer function determined via the test pulse by a corresponding inverse phase shift and amplification, or, more generally, by the inverse transfer function. Determining the transfer function(s) via a test pulse allows even conditions that are not accessible to the calculation to be detected since, for example, the properties of the patient are only partially known.

In one embodiment, a transfer function between one or more transmission interference suppression antenna(s) 60 and one or more sensor(s) 71 may be determined in the same way. For example, in act S31, a predetermined second test pulse may be emitted with the transmission interference suppression facility via the transmission interference suppression antenna. In one embodiment, the previously described test pulse and the second test pulse may be identical. In act S32, the field strength produced by the second test pulse is detected by the plurality of sensors, and a far field transfer function for the transmission interference suppression antenna is then ascertained as a function of the second test pulse by the transmission interference suppression facility; this function is then taken into account when determining the transmission interference suppression signal in act S60.

Different variations of the method may then be provided. For example, the transmission interference suppression signal may be determined directly from the transfer functions and the information about the excitation signal. In one embodiment, the transfer function(s) may be determined once during the installation of the magnetic resonance tomography unit 1. In one embodiment, however, the determining takes place in each case at least before an image capture in order to take into account the change due to the patient. In one embodiment, the transfer functions or the parameters of the transfer functions, such as attenuation and phase shift, may also be permanently adjusted by an optimization method in which, for example, the energy of the signal detected by the sensors 71, resulting from the excitation signal and the transmission interference suppression signal, is minimized. At the same time, the emission of the excitation signal into the surroundings of the magnetic resonance tomography unit is thereby minimized, since the sensors are already arranged in the far field and thereby indicate a measure of the fields at a large spacing.

Although the invention has been illustrated and described in detail by the exemplary embodiments, the invention is not limited by the disclosed examples, and a person skilled in the art may derive other variations herefrom without departing from the scope of the invention. The elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent.
Such new combinations are to be understood as forming a part of the present specification. While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.
DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well-known methods, procedures, systems, components, and/or circuitry have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.

The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprise," "comprises," and/or "comprising," "include," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

It will be understood that the terms "system," "engine," "unit," "module," and/or "block" used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be replaced by other expressions if they achieve the same purpose.

Generally, the words "module," "unit," or "block," as used herein, refer to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or another storage device. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof. It will be understood that when a unit, engine, module, or block is referred to as being “on,” “connected to,” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The terms “first,” “second,” “third,” etc. are used to distinguish similar objects and do not denote a specific ranking of the objects. These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale. Provided herein are systems and apparatus related to the transit of superconducting devices. A superconducting device may refer to a device made of a superconducting material (e.g., niobium-titanium (NbTi), niobium-tin (Nb3Sn), niobium-aluminum (Nb3Al)). A superconducting device may need to be kept at a relatively low temperature (e.g., approaching absolute zero) to maintain superconductivity, the required temperature depending on the superconducting material. In some embodiments, the superconducting device may be applied in various fields such as rail transportation, power electronics, medical devices, etc. For illustration purposes, a medical device may be taken as an example to facilitate the understanding of the superconducting device, which is not intended to limit the scope of the present disclosure. Merely by way of example, the medical device may include an MR scanner, an MR and radiation therapy (MR-RT) scanner, a positron emission computed tomography and MR (PET-MR) scanner, etc., that includes a superconducting magnet. An aspect of the present disclosure relates to an apparatus for the transit of the superconducting magnet. The apparatus may include a compressor configured to compress a cryogen after the cryogen cools a superconducting magnet in the transit of the superconducting magnet. The apparatus may also include a power supply device configured to provide power to at least the compressor and a thermal management system (e.g., including an air-cooled device) configured to be in thermal communication with (e.g., cool) the compressor. The apparatus may further include a container configured to accommodate at least one of the compressor, the thermal management system, and the power supply device. 
Accordingly, the compressor may compress the cryogen after the cryogen cools a superconducting magnet during a transit of the superconducting magnet, and the compressed cryogen may be cooled by a refrigeration device of the cryostat. The cooled cryogen may be used to cool the superconducting magnet, which may reduce the loss of the cryogen and provide a low temperature for the superconducting magnet, thereby keeping the superconducting magnet in a superconducting state during the transit of the superconducting magnet. In some embodiments, the apparatus may include a thermal management system configured to cool the compressor. The thermal management system may include an air-cooled device. According to some embodiments of the present disclosure, instead of a water-cooled device that has a relatively complex structure (e.g., including a water pump, pipes, pipe heat dissipation structures, etc.) and a relatively large volume, the air-cooled device may be in fluid communication with the refrigeration device of the cryostat to cool the compressor, which can reduce the volume and complexity of the apparatus and provide relatively good heat dissipation performance during the transit of the superconducting magnet. Therefore, the apparatus may be convenient for transporting or shipping the superconducting magnet by a vehicle, a ship, an airplane, etc. In some embodiments, the container may include a first compartment and a second compartment which accommodate the compressor and the power supply device separately, further improving the heat dissipation performance. In some embodiments, a damping device may be mounted in the container to support the compressor, which can reduce the vibration of the compressor during the transit of the superconducting magnet. In some embodiments, an exhaust component including a shelter part may be mounted outside the container, which can prevent foreign matter (e.g., rain, dust) from dropping into the compressor. FIG. 1 is a schematic diagram illustrating an exemplary apparatus for the transit of a superconducting device according to some embodiments of the present disclosure. The apparatus 100 may be configured to refrigerate the superconducting device, e.g., during the transit of the superconducting device. For illustration purposes, the apparatus 100 may be configured to refrigerate a superconducting magnet of an MR scanner during the transit of the superconducting magnet. During the transit of the superconducting magnet, the superconducting magnet may be accommodated in a cryostat (not shown). The apparatus 100 may be physically connected with the cryostat to facilitate the refrigeration of the superconducting magnet. In some embodiments, the cryostat may include a refrigeration device 110 (e.g., a cold head assembly) and a cryogen (e.g., helium or a hyperpolarized material) filled in the cryostat. The apparatus 100 may include at least a compressor 102 and a thermal management system 104. The refrigeration device 110 and the compressor 102 may cooperate with each other to refrigerate the superconducting magnet using the cryogen. For example, the compressor 102 may be configured to compress the cryogen after the cryogen cools the superconducting magnet. The refrigeration device 110 may be configured to supply refrigeration of the superconducting magnet using the compressed cryogen. The thermal management system 104 may be configured to be in thermal communication with (e.g., cool) the compressor 102 and/or the compressed cryogen. 
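Merely for illustration, the cooperation among the compressor 102, the thermal management system 104, and the refrigeration device 110 described above may be traced as a simple qualitative state model (a non-limiting sketch; the class name, function names, and the qualitative "low"/"high" labels are hypothetical and are not taken from the present disclosure):

    from dataclasses import dataclass

    @dataclass
    class CryogenState:
        """Qualitative state of the circulating cryogen."""
        phase: str        # "liquid" or "gas"
        temperature: str  # "low" or "high"
        pressure: str     # "low" or "high"

    def absorb_magnet_heat(c: CryogenState) -> CryogenState:
        # The liquid cryogen boils off after absorbing heat from the magnet.
        return CryogenState(phase="gas", temperature="high", pressure="low")

    def compress(c: CryogenState) -> CryogenState:
        # The compressor (e.g., compressor 102) raises the gas pressure.
        return CryogenState(phase="gas", temperature="high", pressure="high")

    def reject_heat(c: CryogenState) -> CryogenState:
        # The thermal management system (e.g., 104) removes the heat of compression.
        return CryogenState(phase="gas", temperature="low", pressure="high")

    def recondense(c: CryogenState) -> CryogenState:
        # The refrigeration device (e.g., cold head 110) reliquefies the cooled gas.
        return CryogenState(phase="liquid", temperature="low", pressure="low")

    # One pass around the circulating loop described with reference to FIG. 1.
    state = CryogenState(phase="liquid", temperature="low", pressure="low")
    for stage in (absorb_magnet_heat, compress, reject_heat, recondense):
        state = stage(state)
    assert state.phase == "liquid"  # the cryogen is recovered rather than lost

The point of the sketch is simply that the loop closes: the cryogen returns to the liquid state instead of being vented, which is how the loss of cryogen is reduced during transit.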
As shown in FIG. 1, the connection between the apparatus 100 and the cryostat may be achieved by physically connecting the compressor 102 with the refrigeration device 110 to form a circulating loop through which the cryogen may be circulated. Merely by way of example, to form the circulating loop, an inlet A1 of the compressor 102 may be connected with an outlet B2 of the refrigeration device 110 via a channel 152, and an outlet A2 of the compressor 102 may be connected with an inlet B1 of the refrigeration device 110 via a channel 153. In some embodiments, the cryogen may be circulated in the circulating loop to achieve a heat exchange between the cryogen and the outside after the cryogen cools the superconducting magnet. For example, when the cryogen cools the superconducting magnet, the cryogen may perform heat exchange with the superconducting magnet, and the cryogen may be converted from a liquid state to a gas state after absorbing heat from the superconducting magnet. The gas cryogen may flow to the compressor 102 via the inlet A1 from the refrigeration device 110 via the outlet B2 through the channel 152. The compressor 102 may compress the gas cryogen from a relatively high temperature and a relatively low pressure to a relatively high temperature and pressure. Then, the compressed gas cryogen may be cooled by the thermal management system 104 from the relatively high temperature and pressure to a relatively low temperature and a relatively high pressure. The gas cryogen with the relatively low temperature and the relatively high pressure may flow to the refrigeration device 110 via the inlet B1 through the channel 153 and be converted into a liquid cryogen by the refrigeration device 110. When the cryogen is compressed, the cryogen may generate a large amount of heat and/or the compressor 102 may generate a large amount of heat, which may result in an abnormal operation and/or malfunction of the compressor 102. Heat dissipation of the compressor 102 may be achieved by the thermal management system 104, thereby avoiding the abnormal operation and/or malfunction of the compressor 102 caused by overheating. More descriptions regarding the thermal management system 104 may be found elsewhere in the present disclosure (e.g., FIGS. 3-5 and the descriptions thereof). In some embodiments, the apparatus 100 may further include a power supply device (e.g., a power supply device 310 as shown in FIGS. 3-6). The power supply device may include an external power supply device, a power generation device, etc., configured to supply power for at least one of the compressor 102, the thermal management system 104, or the refrigeration device 110. For example, the power supply device may be electronically connected with the compressor 102 via a power line 155 to provide power for the compressor 102. As another example, the compressor 102 may be electronically connected with the refrigeration device 110 via a power line 154. When the compressor 102 is powered, the compressor 102 may provide power for the refrigeration device 110 via the power line 154. As still another example, the power supply device may be electronically connected with the refrigeration device 110 to provide power for the refrigeration device 110. More descriptions regarding the power supply device may be found elsewhere in the present disclosure (e.g., FIGS. 3-5 and the descriptions thereof). In some embodiments, the apparatus 100 may include a container that may be detachably mounted on a moveable platform (e.g., a vehicle). 
The cryostat, the compressor 102, and/or the thermal management system 104 may be located in the container. More descriptions regarding the container may be found elsewhere in the present disclosure (e.g., FIGS. 3-5 and the descriptions thereof). It should be noted that the above description of the apparatus 100 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. FIG. 2 is a schematic diagram illustrating an exemplary apparatus for the transit of a superconducting device according to some embodiments of the present disclosure. The apparatus 200 may be configured to refrigerate the superconducting device (e.g., a medical device), e.g., during the transit of the medical device. For example, the apparatus 200 may accommodate the superconducting magnet and be located on a movable platform (e.g., a vehicle). The apparatus 200 may cool the superconducting magnet during the transit of the superconducting magnet. As shown in FIG. 2, the apparatus 200 may include a compressor 202, a thermal management system 204, a container 206, an exhaust component 208, etc. The compressor 202 may be configured to compress a cryogen after the cryogen cools the medical device. In some embodiments, the cryogen may be converted from a liquid state to a gas state after cooling the medical device. The compressor 202 may convert the gas cryogen with a relatively high temperature and a relatively low pressure to a compressed gas cryogen with a relatively high temperature and pressure. The compressed gas cryogen may be further converted to a compressed gas cryogen with a relatively low temperature and a relatively high pressure by being cooled by the thermal management system 204. In some embodiments, the compressor 202 may be mounted at any suitable position in the container 206. For example, the compressor 202 may be mounted at a position in the container close to a bottom of the container 206. As another example, the compressor 202 may be mounted at a position close to a refrigeration device (e.g., the refrigeration device 110 as shown in FIG. 1) in a cryostat which is used to accommodate the medical device. The thermal management system 204 may be in thermal communication with (e.g., cool) the compressor 202. In some embodiments, the thermal management system 204 may include an air-cooled device. The air-cooled device may absorb heat generated by the compressor 202 using cool air entering from the outside of the container 206. The air-cooled device may exhaust the heated air from the container 206 to the outside. For example, the air-cooled device may include at least one of a heat exchanger, an exhaust fan, and a tube. The heat exchanger may include one or more heat sinks made of metal (e.g., copper or aluminum alloy) configured to absorb the heat generated by the compressor 202. The absorbed heat may further be absorbed by the cool air. The heat exchanger may be mounted or integrated on a top of a shell of the compressor 202. The exhaust fan may be configured to exhaust the heated air to the outside of the container 206. The exhaust fan may be mounted above the heat exchanger and/or on a top of a shell of the container 206. The tube may be configured to provide an air flowpath along which the heated air may flow from the heat exchanger to the exhaust fan. 
The compressor may be in thermal communication with the thermal management system 204 via the air flowpath. More descriptions regarding the air-cooled device may be found elsewhere in the present disclosure (e.g., FIGS. 3-6 and the descriptions thereof). In some embodiments, the thermal management system 204 may include an air conditioner that is wall-mounted or floor-mounted. The air conditioner may be thermally coupled to the compressor 202, that is, the air conditioner may be thermally connected with the compressor 202 to absorb the heat generated by the compressor 202. The container 206 may be configured to accommodate one or more components (e.g., the compressor 202 and/or the thermal management system 204) of the apparatus 200. For example, at least one of the compressor 202 or the thermal management system 204 (e.g., both the compressor 202 and the thermal management system 204) may be located in the container 206. In some embodiments, the container 206 may be configured to accommodate the medical device (e.g., a cryostat accommodating a superconducting magnet). In some embodiments, the container 206 may include a first vent and a second vent. Air may flow from the outside of the container 206 into the container 206 through the second vent. The thermal management system 204 may absorb heat generated by the compressor 202 using the air. The thermal management system 204 may further exhaust the heated air from the container 206 through the first vent. The exhaust component 208 may be configured to exhaust the heated air after the air absorbs the heat generated by the compressor 202. In some embodiments, the exhaust component 208 may further be configured for rain-protection, sun-protection, and dust-protection of the compressor 202 and/or the thermal management system 204. For example, the exhaust component 208 may be mounted outside the container 206 and shelter at least the first vent of the container 206, which may achieve rain-protection, sun-protection, and dust-protection of the compressor 202 and/or the thermal management system 204. More descriptions regarding the exhaust component 208 may be found elsewhere in the present disclosure (e.g., FIGS. 3-7 and the descriptions thereof). It should be noted that the above description of the apparatus 200 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the apparatus 200 may not include the exhaust component 208 (i.e., the exhaust component 208 may be omitted). In some embodiments, the first vent may be set on the top of the container 206, and accordingly, the exhaust component 208 may be set above the top of the container 206 to shelter the first vent. In some embodiments, the apparatus 200 may further include one or more additional components such as a power supply device, a damping device, and/or one or more lifting lugs. 
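As a rough, non-limiting illustration of how the air-cooled device described above might be sized, the volumetric airflow needed to carry a given compressor heat load from the second vent (intake) to the first vent (exhaust) can be estimated from an energy balance on the cooling air. The heat load, air properties, and allowable temperature rise below are hypothetical example values, not values taken from the present disclosure:

    # Energy balance: Q = rho * cp * V_dot * dT  =>  V_dot = Q / (rho * cp * dT)
    AIR_DENSITY = 1.2         # kg/m^3, approximate density of air near room temperature
    AIR_SPECIFIC_HEAT = 1005  # J/(kg*K), approximate specific heat of air

    def required_airflow(heat_load_w: float, air_temp_rise_k: float) -> float:
        """Volumetric airflow (m^3/s) needed to remove heat_load_w watts while
        letting the cooling air warm by air_temp_rise_k kelvin."""
        return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * air_temp_rise_k)

    # Example: a hypothetical 7 kW compressor heat load and a 10 K allowable
    # temperature rise between intake and exhaust.
    flow = required_airflow(7000.0, 10.0)
    print(f"required airflow: {flow:.2f} m^3/s (~{flow * 3600:.0f} m^3/h)")

An exhaust fan and vent sizes would then be chosen so that roughly this flow can be sustained along the air flowpath without the intake and exhaust streams mixing.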
FIG. 3 is a schematic diagram illustrating an exemplary apparatus for the transit of a superconducting device according to some embodiments of the present disclosure. FIG. 4 is a schematic diagram illustrating a front view of the apparatus shown in FIG. 3 according to some embodiments of the present disclosure. FIG. 5 is a schematic diagram illustrating a left view of the apparatus shown in FIG. 3 according to some embodiments of the present disclosure. FIG. 6 is a schematic diagram illustrating a top view of the apparatus shown in FIG. 3 according to some embodiments of the present disclosure. The apparatus 300 may be similar to or the same as the apparatus 200 as illustrated in FIG. 2. For example, the apparatus 300 may include a compressor 302, a thermal management system 304, a container 306, and an exhaust component 308 that are similar to or the same as the compressor 202, the thermal management system 204, the container 206, and the exhaust component 208, respectively, as described in FIG. 2. The apparatus 300 may further include a power supply device 310, a damping device (also referred to as a damping platform) 312, etc. The container 306 may be configured to accommodate and/or protect one or more components of the apparatus 300 and/or the medical device. For example, the compressor 302, the thermal management system 304, and the power supply device 310 may be located in the container 306. In some embodiments, the container 306 may include a first compartment and a second compartment. At least one of the compressor 302 or the thermal management system 304 may be located in the first compartment. The power supply device 310 may be located in the second compartment. In some embodiments, the first compartment may be a relatively closed space 3040 in comparison with the second compartment. Merely by way of example, the second compartment may include an open structure such as a load-carrying framework. The load-carrying framework may include a first baseplate 3061, a second baseplate 3062, and a plurality of pillars 3063 located between the first baseplate 3061 and the second baseplate 3062. The first baseplate 3061 may be located at a position higher than that of the second baseplate 3062, such that the first baseplate 3061 and the second baseplate 3062 may also be referred to as a top baseplate and a bottom baseplate, respectively. Two ends of each of the plurality of pillars 3063 may be connected with the top baseplate 3061 and the bottom baseplate 3062, respectively. The plurality of pillars 3063 may be configured to fix and/or support the top baseplate 3061 and the bottom baseplate 3062. The top baseplate 3061, the bottom baseplate 3062, and the plurality of pillars 3063 may form an open space 3060 to accommodate the power supply device 310. In some embodiments, at least one of the plurality of pillars 3063 may be detachably connected with the top baseplate 3061 and the bottom baseplate 3062. In some embodiments, when the at least one of the plurality of pillars 3063 is removed from the container 306, the power supply device 310 may be placed in and/or removed from the open space 3060 of the second compartment. In some embodiments, the top baseplate 3061 may be physically connected with a top side of the first compartment, and/or the bottom baseplate 3062 may be physically connected with a bottom side of the first compartment. For example, the top baseplate 3061 may be flush with the top side of the first compartment, and the bottom baseplate 3062 may be flush with the bottom side of the first compartment. 
In some embodiments, the top baseplate 3061 and the top side of the first compartment may be integrally formed. The bottom baseplate 3062 and the bottom side of the first compartment may be integrally formed. In some embodiments, the first compartment may be detachably connected with the second compartment. For example, the shell of the first compartment may be connected with the shell of the second compartment via one or more screws, rivets, bolts, hinges, or the like, or a combination thereof. In some embodiments, the first compartment and the second compartment may be spaced by a first baffle 3064, i.e., the first compartment and the second compartment may share a sidewall. The shared sidewall may include a hole (e.g., facing the power supply device 310) through which maintenance personnel may enter the second compartment to repair and maintain the power supply device 310. The container 306 may further include a second baffle 3065 configured to cover the hole on the shared sidewall (i.e., the first baffle 3064). The second baffle 3065 may be detachably connected with the shared sidewall (i.e., the first baffle 3064) by one or more connection components 3066. Exemplary connection components 3066 may include a screw, a rivet, a bolt, a hinge, or the like, or any combination thereof. In some embodiments, a door 3067 may be set on a sidewall of the first compartment different from the shared sidewall (i.e., the first baffle 3064). When the power supply device 310 experiences a malfunction and/or requires periodic maintenance, the maintenance personnel may open the door 3067 of the first compartment and enter the first compartment. The maintenance personnel may enter the second compartment through the hole on the shared sidewall (i.e., the first baffle 3064) by opening or removing the second baffle 3065 to repair and/or maintain the power supply device 310. The compressor 302 may be configured to compress a cryogen after the cryogen cools the medical device. In some embodiments, during the transit of the superconducting device (e.g., a superconducting magnet of an MR scanner), the compressor 302 may cooperate with a refrigeration device (e.g., a cold head) to cool the medical device using the cryogen. For example, the cryogen may be heated after absorbing heat from the medical device. The heated cryogen may flow to the compressor 302. The compressor 302 may operate continuously or intermittently to compress the heated cryogen, and the compressed cryogen may flow to the refrigeration device to be cooled, which may reduce the consumption of the cryogen. During the operation of the compressor 302, the compressor 302 may generate heat. The heat generated by the compressor may be diffused in various directions from a thermovent (e.g., a vent for heat dissipation) of the compressor 302, which may result in a temperature rise in the container 306 and cause abnormal operation of the compressor 302. More descriptions regarding the compressor 302 compressing the cryogen may be found elsewhere in the present disclosure (e.g., FIGS. 1 and 2 and the descriptions thereof). The thermal management system 304 may be configured to be in thermal communication with (e.g., cool) the compressor 302 and/or the compressed cryogen. For example, the thermal management system 304 may absorb heat generated by the compressor 302 using air. In some embodiments, the container 306 may include a first vent 3068 and a second vent 3069. The air may flow into the container 306 from the outside through the second vent 3069 of the container 306. 
The thermal management system 304 may exhaust the heated air from the container 306 through the first vent 3068 of the container 306, as shown in FIG. 4. The first vent 3068 and the second vent 3069 may be located at different positions on the shell of the container 306, i.e., the first vent 3068 and the second vent 3069 may be spaced apart, such that the incoming air and the exhausted heated air may not mix with each other, thereby improving the efficiency of heat dissipation. For example, the first vent 3068 may be located on the top of the shell of the container 306 (e.g., being vertically aligned with the thermovent of the compressor 302), and the second vent 3069 may be located on a sidewall of the shell of the container 306. As another example, the first vent 3068 may face the thermal management system 304 in a vertical direction, such that the heated air may be exhausted by the heat exchanger substantially along a straight line, keeping an exhausted direction of the heated air unchanged, thereby improving the efficiency of heat dissipation. In some embodiments, the thermal management system 304 may include an air-cooled device including a heat exchanger (also referred to as a heat exchange fan, a cooling blower, a cooling fan, etc.). In some embodiments, the air-cooled device may further include a tube 3041, an exhaust fan 3042, etc., as shown in FIG. 4. The heat exchanger may be configured to absorb the heat generated by the compressor 302. The absorbed heat may further be absorbed by air. The tube 3041 may be configured to guide the heated air to flow from the heat exchanger to the exhaust fan 3042. The exhaust fan 3042 may be configured to exhaust the heated air from the container 306 through the first vent 3068. In some embodiments, the tube 3041 may be in fluid communication with the heat exchanger and the exhaust fan 3042. For example, the exhaust fan 3042 may be mounted on the shell of the container 306 and face the first vent 3068 which is on the top of the shell of the container 306. An end of the tube 3041 may be connected with an end of the heat exchanger away from the compressor 302, and the other end of the tube 3041 may be connected with the exhaust fan 3042, such that the tube 3041 may guide the heated air to flow from the heat exchanger to the first vent 3068 for exhaustion. In some embodiments, the tube 3041 may be made of a soft material such as cloth, plastic, etc., to achieve a soft connection between the heat exchanger and the exhaust fan 3042. In some embodiments, at least one of the two ends of the tube 3041 may be detachably connected with the heat exchanger and/or the exhaust fan 3042. For example, when an outdoor temperature is relatively high (e.g., in summer, spring, or autumn), the tube 3041 may be connected with the heat exchanger and the exhaust fan 3042. Alternatively, when the outdoor temperature is relatively low (e.g., in winter), the tube 3041 may be removed from the heat exchanger and the exhaust fan 3042. In such cases, the container 306 may be maintained in a preset temperature range (e.g., 5° C. to 38° C., 10° C. to 30° C., or less than 40° C.), which reduces the effect of the outdoor temperature on the compressor 302. 
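Merely for illustration, the seasonal choice just described may be expressed as a simple rule (a sketch only; the changeover temperature below is a hypothetical example and is not a value given in the present disclosure):

    def tube_should_be_connected(outdoor_temp_c: float,
                                 connect_above_c: float = 15.0) -> bool:
        """Return True if the tube 3041 should couple the heat exchanger to the
        exhaust fan 3042 (warm weather: duct the heat straight out of the
        container); return False if the tube may be removed (cold weather: let
        the rejected heat help hold the container in its preset range)."""
        return outdoor_temp_c >= connect_above_c

    for t in (30.0, 0.0):  # e.g., a summer day and a winter day
        status = "tube connected" if tube_should_be_connected(t) else "tube removed"
        print(f"{t} deg C -> {status}")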
By the arrangement of the thermal management system 304 (including the air-cooled device with the heat exchanger, the tube 3041, and the exhaust fan 3042), the heat generated by the compressor 302 may be absorbed by the air and exhausted from the container 306 to the outside of the container 306, which prevents the heat from diffusing in all directions and causing a temperature rise in the container 306. Such a temperature rise may not be conducive to controlling the temperature in the container 306 within the preset temperature range. The exhaust component 308 may be configured to exhaust heated air. The exhaust component 308 may further be configured for rain-protection, sun-protection, and/or dust-protection of the compressor 302 and/or the thermal management system 304. In some embodiments, the exhaust component 308 may be mounted outside the shell of the container 306. For example, the exhaust component 308 may be mounted above the first vent 3068 of the container 306 and cover the first vent 3068. That is, the first vent 3068 may be within a projection region of the exhaust component 308 on the top of the shell of the container 306. The heated air exhausted from the first vent 3068 may be further exhausted to the outside through the exhaust component 308. In addition, the exhaust component 308 may prevent rain and/or dust from entering the container 306 and affecting the operation of the compressor 302 and the thermal management system 304. More descriptions regarding the exhaust component 308 may be found elsewhere in the present disclosure (e.g., FIG. 7 and the descriptions thereof). The power supply device 310 may be configured to provide power for at least one of the compressor 302, the thermal management system 304, or a refrigeration device (e.g., the refrigeration device 110 as described in FIG. 1) associated with the medical device. For example, the power supply device 310 may provide power for the compressor 302 during the transit of the medical device. The power supply device 310 may include a conventional power generation device, a new energy generation device, a storage battery, etc. Exemplary conventional power generation devices may include a diesel generation device, a turbine generation device, a gasoline generation device, or the like, or any combination thereof. Exemplary new energy generation devices may include a wind power generation device, a solar power generation device, a hydrogen power generation device, or the like, or any combination thereof. Exemplary storage batteries may include a lithium-ion battery, a lead storage battery, or the like, or any combination thereof. For illustration purposes, taking the diesel generation device as an example, the diesel generation device may generate heat and soot during its operation. By placing the diesel generation device in the second compartment with an open structure, the heat and soot generated by the diesel generation device may be discharged to the outside of the container 306 with improved efficiency, so as not to affect the operation of the compressor 302 and/or the thermal management system 304. The damping device 312 may be configured to reduce the vibration of the compressor 302 during the transit of the medical device. One end of the damping device 312 may be connected with or fixed on the bottom of the container 306, and the other end of the damping device 312 may be connected with the compressor 302 to support the compressor 302. 
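As a rough illustration of the spring-supported mounting detailed below, the stiffness of several identical springs acting in parallel, and the resulting static deflection under the compressor's weight, can be estimated as follows (a sketch; the per-spring stiffness and compressor mass are hypothetical example values, not values from the present disclosure):

    G = 9.81  # m/s^2, gravitational acceleration

    def parallel_spring_support(spring_count: int,
                                stiffness_per_spring_n_per_m: float,
                                compressor_mass_kg: float):
        """Springs mounted side by side share the load, so their stiffnesses add.
        Returns (total stiffness in N/m, static deflection in m)."""
        k_total = spring_count * stiffness_per_spring_n_per_m
        deflection = compressor_mass_kg * G / k_total
        return k_total, deflection

    # Example: four springs at the corners of the connection plate, each with a
    # hypothetical stiffness of 50 kN/m, supporting a hypothetical 120 kg compressor.
    k, x = parallel_spring_support(4, 50_000.0, 120.0)
    print(f"total stiffness: {k:.0f} N/m, static deflection: {x * 1000:.1f} mm")

Adding springs stiffens the support and reduces the static deflection, consistent with the observation below that a larger count of springs yields a greater effect from the damping device.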
In some embodiments, the damping device 312 may include one or more damping components and a connection plate 3122. The one or more damping components may include a damping spring made of a metal material, a damping rubber made of rubber, a damping gasbag filled with air, or the like, or any combination thereof. As shown in FIG. 4, taking one or more damping springs 3121 as an example, the connection plate 3122 may be mounted between the one or more damping springs 3121 and the compressor 302 to support the compressor 302. An end of each of the one or more damping springs 3121 may be mounted on the bottom of the container 306, and another end of each of the one or more damping springs 3121 may be connected with the connection plate 3122. The one or more damping springs 3121 may be configured to offset or reduce the vibration of the compressor 302 by deformation. The larger a count or number of the one or more damping springs 3121 is, the greater the damping effect of the damping device 312 may be. In some embodiments, the one or more damping springs 3121 may be arranged evenly. For example, the one or more damping springs 3121 may be arranged in a matrix. As another example, a count of the one or more damping springs 3121 may be 4. The 4 damping springs 3121 may be located at four corners of the connection plate 3122. In some embodiments, the apparatus 300 may be loaded to and/or unloaded from a transportation tool (e.g., a truck). For example, the container 306 may include one or more lifting lugs 3070 on the top of the container 306 (e.g., on the top baseplate 3061 of the second compartment and the top of the shell of the first compartment as shown in FIGS. 3-5). The apparatus 300 may be loaded to and/or unloaded from the transportation tool by lifting via the one or more lifting lugs 3070. As another example, the container 306 may include one or more cross-bars 3071 on the bottom of the container 306 (e.g., on the bottom baseplate 3062 of the second compartment and the bottom of the shell of the first compartment as shown in FIGS. 3 and 4). The apparatus 300 may be loaded to and/or unloaded from the transportation tool by a forklift inserted into a gap between two adjacent cross-bars 3071. Further, each of the one or more cross-bars 3071 may include one or more holes 3072. Each of the one or more holes 3072 may include a regular shape (e.g., a circle, a square, a rectangle, or an oval) or an irregular shape. After the container 306 is loaded to the transportation tool, the container 306 may be fixed on the transportation tool based on the one or more holes 3072 and/or the one or more lifting lugs 3070. In some embodiments, the apparatus 300 may include one or more sensors, positioning circuits, communication circuits, etc., for the acquisition and transmission of information for the transit of the medical device. The one or more sensors may include a temperature sensor, a power level sensor, a humidity sensor, a positioning module, a communication circuit, etc. For example, as shown in FIG. 3, the apparatus 300 may include a temperature sensor 316 located in the container 306 (e.g., the first compartment of the container 306). The temperature sensor 316 may be configured to detect the temperature in the container 306 (i.e., a temperature in the first compartment of the container 306). In some embodiments, the temperature sensor 316 may be connected with the control device 314 (e.g., by an electronic connection). The temperature sensor 316 may send the detected temperature to the control device 314 for further processing. 
As another example, as shown in FIG. 5, the apparatus 300 may include a power level sensor 318, a positioning module 319, a communication circuit 320, etc. In some embodiments, the power level sensor 318 and/or the positioning module 319 may be connected with the communication circuit 320 (e.g., by an electronic connection). The power level sensor 318 may be configured to detect a power level of the power supply device 310. Taking the diesel generation device as an example, the power level sensor 318 may include an oil level sensor, and the power level may include an oil level. The oil level sensor may detect the oil level of the diesel generation device. The positioning module 319 may be configured to detect a geographic location of the apparatus 300. In some embodiments, the positioning module 319 may achieve the location detection based on a Global Positioning System (GPS) principle, a global navigation satellite system (GLONASS) principle, a compass navigation system (COMPASS) principle, a BeiDou navigation satellite principle, a Galileo positioning principle, a quasi-zenith satellite system (QZSS) principle, a location-based service (LBS) principle (also referred to as a base station positioning principle), etc. The communication circuit 320 may be configured to facilitate a communication between components. For example, the communication circuit 320 may send the power level (e.g., the oil level) and/or the geographic location to a terminal. The terminal may display the power level and/or the geographic location for a user associated with the terminal. In some embodiments, the communication circuit 320 may include a long distance communication circuit such as a general packet radio service (GPRS) circuit. In some embodiments, a remaining distance to a destination for the transit of the medical device may be determined based on the geographic location of the apparatus 300 (e.g., by the user or automatically). Whether the power level of the power supply device 310 satisfies a power requirement for the remaining distance may be determined based on the remaining distance (e.g., by the user or automatically). If the power level of the power supply device 310 does not satisfy the power requirement for the remaining distance, a power required to be added (e.g., a diesel fuel required to be added) may be determined, such that the user (e.g., a driver in charge of the transit of the medical device) can replenish the power supply device 310 in a timely manner (e.g., by adding at least the required diesel fuel), thereby achieving uninterrupted power supply to components (e.g., the compressor 302) of the apparatus 300 during the transit process. In some embodiments, the apparatus 300 may further include a control device 314. The control device 314 may be configured to control an operation of one or more components (e.g., the exhaust fan 3042, the compressor 302, or the power supply device 310) in the apparatus 300. In some embodiments, the control device 314 may acquire internal information (e.g., information associated with the one or more components for the transit of the medical device, such as a power level of the power supply device 310, a geographic location of the apparatus 300, or a room temperature) and/or external information (e.g., environment information such as an outdoor temperature, weather, etc.) during the transit of the medical device. The control device 314 may control the operation of the one or more components based on the internal information and/or the external information. 
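Merely for illustration, the power-requirement check described above (determining the fuel to be added from the remaining distance) may be sketched as follows; the consumption rate, fuel level, reserve, and distance values are hypothetical examples, and the function names are not taken from the present disclosure:

    def fuel_needed_for(remaining_distance_km: float,
                        consumption_l_per_km: float,
                        reserve_l: float = 5.0) -> float:
        """Fuel (liters) required to finish the trip, including a safety reserve."""
        return remaining_distance_km * consumption_l_per_km + reserve_l

    def fuel_to_add(current_fuel_l: float,
                    remaining_distance_km: float,
                    consumption_l_per_km: float) -> float:
        """Liters the driver should add now; 0.0 if the current level suffices."""
        shortfall = (fuel_needed_for(remaining_distance_km, consumption_l_per_km)
                     - current_fuel_l)
        return max(0.0, shortfall)

    # Example: 40 L in the tank, 300 km to go, a generator burning 0.2 L/km.
    print(f"add at least {fuel_to_add(40.0, 300.0, 0.2):.1f} L")  # -> 25.0 L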
For example, the control device 314 may determine whether a temperature in the container 306 (e.g., in the first compartment of the container 306) is higher than a threshold. The threshold may be associated with the preset temperature range that the container 306 requires to be maintained. For instance, if the preset temperature range is less than 40° C., the threshold may be set as the upper limit of the temperature range, i.e., 40° C. In response to a determination that the temperature in the first compartment is higher than the threshold, the control device 314 may cause the exhaust fan 3042 to operate. As another example, the control device 314 may determine whether a temperature in the container 306 (e.g., in the first compartment of the container 306) is within the preset temperature range. In response to a determination that the temperature is within the preset temperature range, the control device 314 may cause the exhaust fan 3042 to stop. As still another example, the control device 314 may control, based on the temperature in the container 306, a rotation speed of the exhaust fan 3042 to control the heat dissipation of the compressor 302. The higher the rotation speed of the exhaust fan 3042 is, the higher the speed of exhausting the heated air and the speed of heat dissipation of the compressor 302 may be. As a further example, the control device 314 may determine whether the power level of the power supply device 310 satisfies a condition. In response to a determination that the power level satisfies the condition, the control device 314 may cause the power supply device 310 to stop. The condition may include that the power level is less than a threshold power. As still a further example, the control device 314 may determine whether a malfunction exists in the apparatus 300. In response to a determination that a malfunction exists, the control device 314 may cause at least one of the power supply device 310, the compressor 302, the thermal management system 304, etc. of the apparatus 300 to stop. As yet another example, the control device 314 may be in communication with or connected with the power supply device 310. The control device 314 may adjust parameters of the power supply device 310 (e.g., reduce and/or rectify a voltage generated by the power supply device 310) to provide power for one or more components (e.g., the control device 314, the exhaust fan 3042, a temperature sensor 316, a power level sensor 318, a positioning module 319, and/or a communication circuit 320) of the apparatus 300. As yet a further example, the control device 314 may cause the power supply device 310 to operate. After the power supply device 310 has been operated for a preset period, the control device 314 may cause the compressor 302 and the exhaust fan 3042 to operate simultaneously, which ensures that the power supply device 310 provides power to the compressor 302, the exhaust fan 3042, etc. only after the power supply device 310 can output stabilized voltages. In some embodiments, the control device 314 may send the information to a terminal. The terminal may include a display component to display the received information for a user (e.g., a transport personnel in charge of the transit of the medical device, a control personnel, or a manufacturer of the medical device) associated with the terminal. 
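Merely for illustration, the example control rules above may be collected into a single supervisory loop (a non-limiting sketch; the threshold values, the startup-delay duration, and the entire "actuators" interface are hypothetical and are not taken from the present disclosure):

    import time

    TEMP_HIGH_C = 40.0      # upper limit of the preset temperature range
    MIN_POWER_LEVEL = 0.1   # fraction of fuel/charge below which the supply stops
    STARTUP_DELAY_S = 30.0  # let the generator voltage stabilize before loading it

    def control_step(temp_c: float, power_level: float,
                     malfunction: bool, actuators) -> None:
        """One pass of supervisory logic such as that of the control device 314."""
        if malfunction:
            # Stop the compressor, power supply, and thermal management on any fault.
            actuators.stop_all()
            return
        if power_level < MIN_POWER_LEVEL:
            actuators.stop_power_supply()
            return
        if temp_c > TEMP_HIGH_C:
            # Spin the fan faster for a larger overshoot above the threshold.
            overshoot = min(1.0, (temp_c - TEMP_HIGH_C) / 10.0)
            actuators.run_exhaust_fan(speed=0.5 + 0.5 * overshoot)
        else:
            # Back within the preset range: stop the exhaust fan.
            actuators.run_exhaust_fan(speed=0.0)

    def startup(actuators) -> None:
        """Start the power supply first; load it only after its output stabilizes."""
        actuators.start_power_supply()
        time.sleep(STARTUP_DELAY_S)
        actuators.start_compressor()
        actuators.run_exhaust_fan(speed=1.0)

The sketch assumes a duck-typed actuator object; in a concrete implementation these calls would map onto the relays or drivers of the compressor, the exhaust fan, and the power supply device.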
In some embodiments, the terminal may include a touch panel, an input component (e.g., a mouse, a microphone, and/or a keyboard), an output component (e.g., a speaker), or the like, or any combination thereof, for the user to provide feedback in response to the displayed information. For instance, the display component may include a cathode-ray display, an LED display, an electroluminescent display, an electronic paper, a plasma display panel, a liquid crystal display, an organic light-emitting semiconductor display, a surface conductive electron emission display, etc. In some embodiments, the control device 314 may include a processing device. The processing device may include a central processing unit (CPU), a digital signal processor (DSP), a system on a chip (SoC), a microcontroller unit (MCU), or the like, or any combination thereof. In some embodiments, the processing device may include a computer, a user console, a single server or a server group, etc. The server group may be centralized or distributed. In some embodiments, the processing device may be local or remote. In some embodiments, the processing device may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the control device 314 may be integrated into the terminal of the user. It should be noted that the above description of the apparatus 300 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the container 306 may include a third compartment to accommodate the medical device. In some embodiments, the apparatus 300 may be a part of the medical device. In some embodiments, one or more components may be omitted in the apparatus 300. In some embodiments, one or more additional components may be added to the apparatus 300 to facilitate the cooling performance of the apparatus 300. FIG. 7 is a schematic diagram illustrating an exemplary exhaust component according to some embodiments of the present disclosure. As shown in FIG. 7, the exhaust component 308 may include a first part 701 and a second part 702 connected with the first part 701. In connection with FIGS. 3-6, an end of the first part 701 far away from the second part 702 may be connected with the exhaust fan 3042 of the thermal management system 304, that is, the first part 701 may be located between the second part 702 and the exhaust fan 3042. The first part 701 and/or the second part 702 may be made of a waterproof material, a sunscreen material, a heat resistance material, or the like, or any combination thereof. For example, the first part 701 may be made of stainless steel, and the second part 702 may be made of oil cloth. As another example, both the first part 701 and the second part 702 may be made of stainless steel. As still another example, the first part 701 may be made of aluminum alloy, and the second part 702 may be made of stainless steel. In some embodiments, the first part 701 may be hollow and configured with at least one hole 703. The first part may also be referred to as a hollow part. 
For illustration purposes, the first part 701 may include a hollow cylinder as shown in FIG. 7, which is not intended to limit the structure of the first part 701, and the first part 701 may include any suitable hollow structure. Each of the at least one hole 703 may include a regular shape (e.g., a circle, a square, a rectangle, or an oval) or an irregular shape. Sizes of the at least one hole 703 may be the same as or different from one another. The at least one hole 703 may be arranged evenly or randomly on the sidewall of the first part 701. After the heated air is guided from the heat exchanger of the thermal management system 304 to the first vent 3068 through the tube 3041 and the exhaust fan 3042, the heated air may further be exhausted to the outside through the at least one hole 703 of the first part 701. The greater a count of the at least one hole 703 is and/or the larger the size of the at least one hole 703 is, the better the effect of exhausting the heated air may be. In some embodiments, the second part 702 may be configured to shelter the first vent 3068, preventing foreign matter (e.g., rain, dust, etc.) from falling into the container 306 through the first vent 3068 on the top of the shell of the container 306. The second part 702 may include any suitable shape or structure as long as the second part 702 can shelter the first vent 3068 and/or the first part 701, e.g., the first vent 3068 being within a projection region of the second part 702 on the container 306. The second part 702 may also be referred to as a shelter part. For example, the second part 702 may have a circular shape, a square shape, etc. As another example, the second part 702 may include a structure (e.g., a cylinder-shaped structure, a paraboloid-shaped structure) that can cover the first part 701. In some embodiments, the wall of the second part 702 (e.g., a cylinder-shaped structure) may enclose the first part 701. The second part 702 may not contact the top of the shell of the container 306, which ensures the shelter effect of the second part 702 and the exhaust effect of the at least one hole 703 of the first part 701. It should be noted that the above description of the exhaust component 308 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the exhaust component 308 may include one or more additional parts to facilitate the shelter effect and/or the exhaust effect. FIG. 8 is a schematic diagram illustrating an exemplary system for the transit of a superconducting device according to some embodiments of the present disclosure. The system may be associated with a manned and/or unmanned transportation tool such as a vehicle, a ship, an airplane, etc. for the transit of the superconducting device (e.g., a medical device). As shown in FIG. 8, the system 800 may include a driving cab S1, a movable platform S2, a superconducting magnet S3, and an apparatus S4. The movable platform S2 may be physically connected with the driving cab S1. In some embodiments, one or more signal transmission lines, one or more tubes, or the like, or any combination thereof may be set between the driving cab S1, the movable platform S2, the superconducting magnet S3, and/or the apparatus S4 to, e.g., facilitate communications thereof. 
The driving cab S1 may be configured to control the transit of the medical device. In some embodiments, transport personnel (e.g., a driver in charge of the transit of the medical device) may perform operations (e.g., operate a transportation tool associated with the driving cab S1) in the driving cab S1 to carry out the transit of the medical device. In some embodiments, the driving cab S1 may be configured with a terminal which is similar to or the same as a terminal as described elsewhere in the present disclosure (e.g., FIGS. 3-6 and the descriptions thereof). The transport personnel may be informed of information associated with the apparatus S4 via the terminal during the transit of the superconducting magnet S3. The movable platform S2 may be configured to support at least one of the superconducting magnet S3 and the apparatus S4. The superconducting magnet S3 and the apparatus S4 may be moved as the movable platform S2 moves. For example, the superconducting magnet S3 may be accommodated in a cryostat including a refrigeration device. The cryostat may be placed on the movable platform S2. As another example, the apparatus S4 may include a container as described elsewhere in the present disclosure (e.g., FIGS. 3-6 and the descriptions thereof). The container may be placed on the movable platform S2. In some embodiments, the movable platform S2 may include different forms (e.g., a box structure or an open structure) according to different types of the transportation tools. For example, the movable platform S2 may include a carriage of a truck, a warehouse of a ship, a cabin of an airplane, etc. The superconducting magnet S3 may be implemented on a medical device (e.g., an MR scanner). The MR scanner may further include a cryostat that includes a refrigeration device, etc. In some embodiments, the cryostat may be filled with a cooling medium (e.g., a cryogen such as liquid helium or a hyperpolarized material) in an accommodation space of the cryostat to submerge the superconducting magnet S3. In some embodiments, the cryostat may include an inner vessel, an outer vessel, and one or more seal heads. The inner vessel and the outer vessel may have shapes of cylinders and be arranged coaxially. Two seal heads may be located at two ends of the cryostat along an axis direction of the cryostat, respectively. The inner vessel or the outer vessel may include an inner layer, a middle layer, and an outer layer from the inside to the outside. The middle layer may be configured for thermal shielding. A vacuum may be formed between the inner layer and the outer layer. In some embodiments, the superconducting magnet S3 may include a main magnet that can generate a main magnetic field for the MR scanner to perform a scan. The main magnet may include a frame and one or more superconducting coils wound on the frame. The superconducting coils may be thermally coupled to the cooling medium. In some embodiments, the refrigeration device may be similar to or the same as the refrigeration device 110 as described elsewhere in the present disclosure. The refrigeration device may include a cold head assembly. The cold head assembly may include a first cold head and a second cold head with different relatively low temperatures. The first cold head may be located between the outer layer and the middle layer of the cryostat and be configured to keep the cooling medium surrounding the first cold head at a first temperature ranging from 40 K to 70 K. 
The second cold head may extend from the middle layer to the inner layer of the cryostat and be configured to keep the cooling medium surrounding the second cold head at a second temperature below 10 K. For example, the first temperature may be set to be 50 K, and the second temperature may be set to be 4.2 K. More descriptions regarding the superconducting magnet S3 may be found elsewhere in the present disclosure (e.g., FIGS. 9 and 10 and the descriptions thereof). The apparatus S4 may be configured to refrigerate the superconducting magnet S3 during the transit of the medical device. The apparatus S4 may be the same as or similar to the apparatus 100, the apparatus 200, and/or the apparatus 300 as described elsewhere in the present disclosure. For example, the apparatus S4 may include a container (e.g., the same as or similar to the container 306), a compressor (e.g., the same as or similar to the compressor 302), a thermal management system (e.g., the same as or similar to the thermal management system 304), a power supply device (e.g., the same as or similar to the power supply device 310), an exhaust component (e.g., the same as or similar to the exhaust component 308), a damping device (e.g., the same as or similar to the damping device 312), one or more sensors, a control device, or the like, or any combination thereof. In some embodiments, the container may include a first compartment and a second compartment sharing a sidewall with the first compartment. The first compartment may be configured to accommodate at least one of the compressor, the thermal management system, or the exhaust component. The second compartment may be configured to accommodate the power supply device. In some embodiments, the second compartment may include a structure of a load-carrying framework. In some embodiments, the compressor may be configured to compress a cryogen after the cryogen cools the superconducting magnet S3 in the transit of the superconducting magnet S3. For example, the compressor may be in fluid communication with the refrigeration device (e.g., a cold head assembly). The refrigeration device may be configured to supply refrigeration of the superconducting magnet S3 using the compressed cryogen. In some embodiments, the thermal management system may be configured to cool the compressor and/or the compressed cryogen. The compressed and/or cooled cryogen may be used to cool the superconducting magnet S3. For example, the thermal management system may include an air-cooled device including a heat exchanger, an exhaust fan mounted on a shell of the container, and a tube in fluid communication with the heat exchanger and the exhaust fan. The heat exchanger may be configured to absorb heat generated by the compressor using air which enters from the outside via a second vent of the container. The tube may be configured to guide the heated air to flow from the heat exchanger to the exhaust fan. The exhaust fan may be configured to exhaust the heated air from the container via a first vent of the container. In some embodiments, the power supply device may be configured to provide power to at least one of the compressor, the thermal management system, or the refrigeration device during the transit of the superconducting magnet S3. The power supply device may be electronically connected with at least one of the compressor, the thermal management system, or the refrigeration device. In some embodiments, the exhaust component may be located outside the container. 
The exhaust component may include a first part connected with the exhaust fan through the first vent and a second part configured to shelter the first vent and connected with the first part, wherein the first vent is within a projection region of the second part on the container. The first part may be hollow and be configured with at least one hole. In some embodiments, the damping device may be mounted in the container. The damping device may include one or more damping springs and a connection plate mounted between the one or more damping springs and the compressor. An end of each of the one or more damping springs may be mounted on a bottom of the container. The connection plate may be configured to support the compressor. In some embodiments, the one or more sensors may include a power level sensor (e.g., the same as or similar to the power level sensor 318), a temperature sensor (e.g., the same as or similar to the temperature sensor 316), etc. For example, the power level sensor may be configured to detect a power level of the power supply device and send the detected power level to the driving cab S1 for display. As another example, the temperature sensor may be configured to detect a room temperature in the container and send the detected room temperature to the driving cab S1. In some embodiments, the control device may be configured to control an operation of each of one or more components in the apparatus S4. For example, the control device may obtain a power level of the power supply device and send the power level to a terminal. As another example, the control device may obtain a room temperature in the container and control a rotation speed of the exhaust fan based on the room temperature. As still another example, the control device may stop at least one of the compressor, the power supply device, or the thermal management system in response to determining that a malfunction exists. In some embodiments, the control device may be integrated into the driving cab S1 or the terminal. It should be noted that the above description of the system 800 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the system 800 may include one or more additional components. In some embodiments, one or more components in the system 800 may be omitted. FIG. 9 is a schematic diagram illustrating an exemplary superconducting magnet of an MR scanner according to some embodiments of the present disclosure. In some embodiments, the superconducting magnet 900 may be accommodated in a cryostat. The cryostat may include an inner vessel 7, an outer vessel 8, and two seal heads 9. The inner vessel 7 and the outer vessel 8 may have shapes of cylinders and be arranged coaxially. Two seal heads 9 may be located at two ends of the cryostat along a direction of the axis of the cryostat, respectively. Each of the two seal heads 9 may be used to connect the inner vessel 7 with the outer vessel 8 at one of the two ends of the cryostat. In some embodiments, the cryostat may be filled with a cryogen (e.g., liquid helium) at a relatively low temperature. 
In some embodiments, the cryostat may be refrigerated based on a compressor and a refrigeration device during the transit of the superconducting magnet as described elsewhere in the present disclosure (e.g.,FIGS.1-6and the descriptions thereof). The refrigerated cryostat may keep the superconducting magnet at the relatively low temperature. As shown inFIG.9, the superconducting magnet900may include one or more magnetic coils. The magnetic coil(s) may include one or more first coils (also referred to as main coils)1, one or more second coils (also referred to as shielding coils)2, and one or more third coils (also referred to as shim coils)3. The first coil(s)1may consist of a plurality of coils connected in series. The plurality of coils may be made of superconducting materials. The one or more first coils1may be configured to generate a main magnetic field. The one or more second coils2may be configured to constrain a stray magnetic field. The one or more third coils3may be configured for active shimming, e.g., generating a corrective magnetic field. The first coil(s)1may be submerged in the cryogen to maintain a low temperature superconducting state with zero resistance. Then, the first coil(s)1may be electrified to generate the main magnetic field with a relatively high strength. In some embodiments, the superconducting magnet900may include a first bracket (also referred to as a main coil frame)4and a second bracket (also referred to as a shielding coil frame). The second bracket may include an outer frame5relatively close to the second coil(s)2and an inner frame6relatively far away from the second coil(s)2. The first coil(s)1may be wound on the main coil frame4. The second coil(s)2may be wound on the outer frame5. The third coil(s)3may be wound on the inner frame6. In some embodiments, the inner frame6may be connected with (e.g., bonded to) an outer surface of the first bracket4. The outer frame5may be connected with an inner wall of the outer vessel8. Therefore, the second coil(s)2may be wound on the outer frame5along a circumferential direction of the cryostat (e.g., a circumferential direction of the inner vessel7or the outer vessel8), and the third coil(s)3may be wound along the circumferential direction of the cryostat. In some embodiments, an outer surface of the inner vessel7may be connected with the first bracket4, such that the first coil(s)1may be wound on the first bracket4along the circumferential direction of the cryostat (e.g., a circumferential direction of the inner vessel7or the outer vessel8). FIG.10is a schematic diagram illustrating an exemplary shielding coil frame according to some embodiments of the present disclosure. In some embodiments,FIG.10may be an enlarged view of the shielding coil frame shown inFIG.9. As shown inFIG.10, the shielding coil frame may include one or more support bars56between the outer frame5and the inner frame6for connecting the outer frame5and the inner frame6and reinforcing the structural strength of the cryostat. In some embodiments, an outer surface of the inner frame6may include a groove31extending along the circumferential direction of the cryostat, such that the third coil(s)3may be wound on the inner frame6along the groove31. In some embodiments, the third coil(s)3may be wound on the inner frame6along the groove31using a dry winding mode. After the winding, the third coil(s)3may be fixed in the groove31. 
Merely by way of example, the third coil(s)3may be fixed by filling the groove31with a glue (e.g., an epoxy resin) in a vacuum state. In some embodiments, the third coil(s)3may be made of a superconducting material. For example, the third coil(s)3may include a non-ideal second-class superconductor such as a NbTi copper-based superconducting coil. In some embodiments, the third coil(s)3may include two groups, each of which is close to one seal head of the two seal heads9at the two ends of the cryostat along the axis direction. The two groups of the third coils3may be connected in series. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in an implementation combining software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon. A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. 
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran, Perl, COBOL, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof to streamline the disclosure and aid in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment. In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate a ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. 
Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail. In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.
74,016
11860254
DETAILED DESCRIPTION The present disclosure describes systems and methods for rapid magnetic field shutdown and recharging in a magnetic resonance imaging (“MRI”) system that includes a superconducting magnet cooled by a mechanical cryocooler. Recently, there have been advances in superconductors and superconducting magnet design aimed at reducing the amount of expensive liquid cryogen required to achieve and maintain superconducting properties. These advances include the development of high temperature superconductors, which are conductors that become superconducting at temperatures higher than 4 K. Currently, practical high temperature superconductors can operate at 10 K, although some materials can demonstrate superconducting properties at temperatures as high as 30 K. Furthermore, there have been recent proposals on cryogen-free magnet designs that use a cryocooler to cool the magnet coil conductors through thermal contact rather than immersing the magnet coils within a liquid helium bath. The systems and methods described here are based on such a cryogen-free superconducting magnet design using traditional, or high temperature, superconductors where the main magnetic field can be turned off in a short amount of time. For instance, the magnetic field can be turned off in an amount of time comparable to a typical amount of time a traditional “quench” would take, e.g., less than 10 seconds. The MRI system described here uses a mechanical cryocooler (or cold head) that is in thermal contact with the conductors in a superconducting magnet to cool them to temperatures approaching 4 K. Here, thermal contact can include direct or indirect contact, through which thermal energy can be transferred or conducted. The superconducting material used for the magnet design preferably maintains superconducting properties up to temperatures approaching 8 K. In the described system, current density can be removed from the conductive windings of the magnet coils in a rapid manner by introducing one or a combination of a power supply source, a resistive load, and an external energy storage device. In one embodiment, a power supply source introduced into the circuit (e.g., by means of a superconducting switch) may be used to supply current to the magnet coils. Supplying current to the magnet coils introduces heat into the system, which can be removed using the thermal cooling capacity of the mechanical cryocooler (or cold head). In another embodiment, a resistive load with a large thermal mass may be introduced into the circuit (e.g., by means of a superconducting switch) and the majority of the energy stored in the superconducting magnet may be dissipated to this load rather than to the magnet coils of the superconducting magnet during a rapid shutdown (or ramp down) to turn off the magnetic field. In yet another embodiment, an external energy storage device may be introduced into the circuit (e.g., by means of a superconducting switch) and may be used to store all, or part of, the energy contained within the superconducting magnet coils that is dissipated during a rapid shutdown (or ramp down) to turn off the magnetic field. As mentioned, in other embodiments, combinations of the power supply source, resistive load, and external energy storage device may be used for rapid shutdown. In addition, one or a combination of the power supply source, the resistive load, and the external energy storage device may be used to recharge the magnet coils after a rapid shutdown. 
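For orientation, the energy that any of these shutdown paths must move out of the magnet is the magnetic energy stored in its windings, given by the standard inductor relation (the relation and the numbers below are illustrative additions, not figures from this disclosure):

$$E = \frac{1}{2} L I^{2}, \qquad \text{e.g., } L = 20~\mathrm{H},\ I = 300~\mathrm{A} \;\Rightarrow\; E = \tfrac{1}{2}(20)(300)^{2}~\mathrm{J} = 0.9~\mathrm{MJ}.$$

Removing energy of this magnitude in under 10 seconds implies an average transfer of roughly 90 kW, which is why the sink (power supply, resistive load, or external storage device) rather than the coils themselves must absorb it.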
In this system, the rate of energy exchange (and thus the rate of magnetic field change) can be controlled so that the temperature of the conductor does not exceed a predetermined threshold that could potentially cause irreversible damage. For example, the predetermined threshold may be the superconducting transition point of the magnet coil material. In this manner, there are no rapid resistance changes in the conductor to cause an uncontrolled loss of magnetic field (i.e., a quench). In another example, the predetermined threshold may be a higher temperature than the superconducting transition point, for example, 20 K, so long as the temperature (a) does not cause significant damage to the wire or magnet structure and (b) does not require a significant amount of time to cool back down to superconducting temperature (˜4-5 K). Referring now toFIG.1, a magnetic resonance imaging system10generally includes a magnet assembly12for providing a magnetic field14that is substantially uniform within a bore16that may hold a subject18or other object to be imaged. The magnet assembly12supports a radio frequency (“RF”) coil (not shown) that may provide an RF excitation to nuclear spins in the object or subject (not shown) positioned within the bore16. The RF coil communicates with an RF system20producing the necessary electrical waveforms, as is understood in the art. The magnet assembly12also supports three axes of gradient coils (not shown) of a type known in the art, and which communicate with a corresponding gradient system22providing electrical power to the gradient coils to produce magnetic field gradients Gx, Gy, and Gzover time. A data acquisition system24connects to RF reception coils (not shown) that are supported within the magnet assembly12or positioned within bore16. The RF system20, gradient system22, and data acquisition system24each communicate with a controller26that generates pulse sequences that include RF pulses from the RF system20and gradient pulses from gradient system22. The data acquisition system24receives magnetic resonance signals from the RF system20and provides the magnetic resonance signals to a data processing system28, which operates to process the magnetic resonance signals and to reconstruct images therefrom. The reconstructed images can be provided to a display30for display to a user. The magnet assembly12includes one or more magnet coils32housed in a vacuum housing34, which generally provides a cryostat for the magnet coils32, and mechanically cooled by a mechanical cryocooler36, such as a Gifford-McMahon (“GM”) cryocooler or a pulse tube cryocooler. In one example configuration, the cryocooler can be a Model RDK-305 Gifford-McMahon cryocooler manufactured by Sumitomo Heavy Industries (Japan). In general, the cryocooler36is in thermal contact with the magnet coils32and is operable to lower the temperature of the magnet coils32and to maintain the magnet coils32at a desired operating temperature. In some embodiments the cryocooler36includes a first stage in thermal contact with the vacuum housing34and a second stage in thermal contact with the magnet coils32. In these embodiments, the first stage of the cryocooler36maintains the vacuum housing34at a first temperature and the second stage of the cryocooler36maintains the magnet coils32at a second temperature that is lower than the first temperature. The magnet coils32are composed of a superconducting material and therefore provide a superconducting magnet. 
The superconducting material is preferably selected to be a material with a suitable critical temperature such that the magnet coils32are capable of achieving desired magnetic field strengths over a range of suitable temperatures. As one example, the superconducting material can be niobium (“Nb”), which has a transition temperature of about 9.2 K. As another example, the superconducting material can be niobium-titanium (“NbTi”), which has a transition temperature of about 10 K. As still another example, the superconducting material can be triniobium-tin (“Nb3Sn”), which has a transition temperature of about 18.3 K. The choice of superconducting material will define the range of magnetic field strengths achievable with the magnet assembly12. Preferably, the superconducting material is chosen such that magnetic field strengths up to about 3.0 T can be achieved over a range of temperatures that can be suitably achieved by the cryocooler36. In some configurations, however, the superconducting material can be chosen to provide magnetic field strengths higher than 3.0 T. The cryocooler36is operable to maintain the magnet coils32at an operational temperature at which the magnet coils32are superconducting, such as a temperature that is below the transition, or critical, temperature for the material of which the magnet coils32are composed. As one example, a lower operational temperature limit can be about 4 K and an upper operational temperature limit can be at or near the transition, or critical, temperature of the superconducting material of which the magnet coils32are composed. The current density in the magnet coils32in the MRI system10may be controllable to rapidly ramp up or ramp down the magnetic field14generated by the magnet assembly12while controlling the temperature of the magnet coils32with the cryocooler36to keep the temperature below the transition temperature of the superconducting material of which the magnet coils32are composed. As one example, the magnetic field14can be ramped up or ramped down on the order of minutes, such as fifteen minutes or less. In general, the current density in the magnet coils32can be increased or decreased by connecting the magnet coils32to a circuit with a power supply38that is in electrical communication with the magnet coils32via a switch40and operating the power supply38to increase or decrease the current in the connected circuit. The switch40is generally a superconducting switch that is operable between a first, closed, state and a second, open, state. When the switch40is in its open state, the magnet coils32are in a closed circuit, which is sometimes referred to as a “persistent mode.” In this configuration, the magnet coils32are in a superconducting state so long as the temperature of the magnet coils32is maintained at a temperature at or below the transition temperature of the superconducting material of which they are composed. When the switch40is in the closed state, however, the magnet coils32and the power supply38can be placed in a connected circuit, and the current supplied by the power supply38and the current in the magnet coils32will try to equalize. For instance, if the power supply38is operated to supply more current to the connected circuit, the current in the magnet coils32will increase, which will increase the strength of the magnetic field14. 
On the other hand, if the power supply38is operated to decrease the current in the connected circuit, the current in the magnet coils32will decrease, which will decrease the strength of the magnetic field14. It will be appreciated by those skilled in the art that any suitable superconducting switch can be used for selectively connecting the magnet coils32and power supply38into a connected circuit; however, as one non-limiting example, the switch40may include a length of superconducting wire that is connected in parallel to the magnet coils32and the power supply38. To operate such a switch40into its closed state, a heater in thermal contact with the switch40is operated to raise the temperature of the superconducting wire above its transition temperature, which in turn makes the wire highly resistive compared to the inductive impedance of the magnet coils32. As a result, very little current will flow through the switch40. The power supply38can then be placed into a connected circuit with the magnet coils32. When in this connected circuit, the current in the power supply38and the magnet coils32will try to equalize; thus, by adjusting the current supplied by the power supply38, the current density in the magnet coils32can be increased or decreased to respectively ramp up or ramp down the magnetic field14. To operate the switch40into its open state, the superconducting wire in the switch40is cooled below its transition temperature, which places the magnet coils32back into a closed circuit, thereby disconnecting the power supply38and allowing all of the current to flow through the magnet coils32. When the magnet coils32are in the connected circuit with the power supply38, the temperature of the magnet coils32will increase as the current in the connected circuit equalizes. Thus, the temperature of the magnet coils32should be monitored to ensure that the temperature of the magnet coils32remains below the transition temperature for the superconducting material of which they are composed. Because placing the magnet coils32into a connected circuit with the power supply38will tend to increase the temperature of the magnet coils32, the rate at which the magnetic field14can be ramped up or ramped down will depend in part on the cooling capacity of the cryocooler36. For instance, a cryocooler with a larger cooling capacity will be able to more rapidly remove heat from the magnet coils32while they are in a connected circuit with the power supply38. The power supply38and the switch40operate under control from the controller26to provide current to the magnet coils32when the power supply38is in a connected circuit with the magnet coils32. A current monitor42measures the current flowing to the magnet coils32from the power supply38, and a measure of the current can be provided to the controller26to control the ramping up or ramping down of the magnetic field14. In some configurations, the current monitor42is integrated into the power supply38. A temperature monitor44is in thermal contact with the magnet assembly12and operates to measure a temperature of the magnet coils32in real-time. As one example, the temperature monitor44can include a thermocouple temperature sensor, a diode temperature sensor (e.g., a silicon diode or a GaAlAs diode), a resistance temperature detector (“RTD”), a capacitive temperature sensor, and so on. RTD-based temperature sensors can be composed of ceramic oxynitride, germanium, or ruthenium oxide. 
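The switch-and-ramp procedure just described lends itself to a simple supervisory loop: energize the switch heater (closed state), step the supply current toward the target while holding the coil temperature under a margin below the transition point, then de-energize the heater (open state) to return to persistent mode. The Python sketch below is illustrative only; all device interfaces are assumed, and a production ramp controller would be considerably more involved.

```python
import time

# Illustrative ramp controller; all device interfaces are hypothetical.
def ramp_to(target_amps, supply, switch_heater, current_monitor,
            temp_monitor, t_limit_k=8.0, step_amps=1.0, dwell_s=0.5):
    """Ramp magnet current toward target_amps while keeping the coil
    temperature below t_limit_k (a margin under the transition point)."""
    switch_heater.on()   # drive the switch wire normal (resistive): closed state
    try:
        while abs(current_monitor.read() - target_amps) > step_amps:
            # Pause the ramp whenever the coils approach the threshold,
            # letting the cryocooler remove the heat of the exchange.
            if temp_monitor.read() >= t_limit_k:
                time.sleep(dwell_s)
                continue
            i_now = current_monitor.read()
            direction = 1.0 if target_amps > i_now else -1.0
            supply.set_current(i_now + direction * step_amps)
            time.sleep(dwell_s)
    finally:
        switch_heater.off()  # cool the switch superconducting: persistent mode
```

The dwell between steps reflects the point made above: the achievable ramp rate is bounded by how quickly the cryocooler can carry away the heat generated while the power supply is in the connected circuit.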
The temperature of the magnet coils32is monitored and can be provided to the controller26to control the ramping up or ramping down of the magnetic field14. In operation, the controller26is programmed to ramp up or ramp down the magnetic field14of the magnet assembly12in response to instructions from a user. As mentioned above, the magnetic field14can be ramped down by decreasing the current density in the magnet coils32by supplying current to the magnet coils32from the power supply38via the switch40, which is controlled by the controller26. Likewise, the strength of the magnetic field14can be ramped up by increasing the current density in the magnet coils32by supplying current to the magnet coils32from the power supply38via the switch40, which is controlled by the controller26. The controller26is also programmed to monitor various operational parameter values associated with the MRI system10before, during, and after ramping the magnetic field14up or down. As one example, as mentioned above, the controller26can monitor the current supplied to the magnet coils32by the power supply38via data received from the current monitor42. As another example, as mentioned above, the controller26can monitor the temperature of the magnet coils32via data received from the temperature monitor44. As still another example, the controller26can monitor the strength of the magnetic field14, such as by receiving data from a magnetic field sensor, such as a Hall probe or the like, positioned in or proximate to the bore16of the magnet assembly12. As mentioned above, certain conditions or situations may require that the magnetic field14of the magnet assembly12be shut down (or turned off) rapidly. For example, an emergency situation may be created by a large metallic object being attracted by the strong magnetic field of the magnet assembly12. In one embodiment, the power supply source38may also be used to rapidly shut down the magnetic field14of the magnet assembly12in response to a shutdown condition. As discussed above, the power supply source38may be connected to the magnet coils32and operated to remove or decrease the current in the magnet coils32. The cryocooler36may be used to remove heat generated by the magnet coils32as the current in the magnet coils32decreases. In an embodiment, the temperature monitor44may be used to measure a temperature of the magnet coils32in real-time. The controller26may be configured to rapidly shut down (or turn off) the magnetic field14of the magnet assembly12in response to instructions from a user. The user may provide instructions to the controller based on the presence of a shutdown condition. In another embodiment, a rapid shutdown (e.g., an emergency shutdown) of the magnetic field of the magnet assembly12may be performed using an energy storage device46that is coupled to the magnet coils32and the controller26. In one embodiment, the energy storage device may be an inductive load. For example, the inductive load may be a second superconducting system. The second superconducting system may be thermally coupled to the cryocooler36of MRI system10and cooled by the cryocooler36. In another embodiment, the energy storage device46may be a battery. The energy storage device46may be coupled to the magnet coils32using a superconducting switch50. The superconducting switch50may be controlled using, for example, controller26to selectively connect the energy storage device46and the magnet coils32into a connected circuit. 
In an embodiment, the superconducting switch50may be any suitable superconducting switch that can be used for selectively connecting the magnet coils32and energy storage device46into a connected circuit. For example, the superconducting switch50may be switched between an open state and a closed state as described in the non-limiting example mentioned above. The energy storage device46may be used to store all, or a part of, the energy contained in the magnet coils32so that the current density is removed from the magnet coils32and the magnetic field14is turned off. In other words, the energy from the magnet coils32may be dissipated into the energy storage device46during the rapid shutdown of the magnetic field14. In an embodiment, the magnetic field14may be turned off in a short amount of time, for example, in an amount of time comparable to the typical amount of time a traditional “quench” would take (e.g., less than 10 seconds). The controller26may be configured to rapidly shut down (or turn off) the magnetic field14of the magnet assembly12in response to instructions from a user. The user may provide instructions to the controller26based on the presence of a shutdown condition. As mentioned above, the rate of energy exchange (and thus the rate of magnetic field change) can be controlled so that the temperature of the conductor (magnet coils32) does not exceed a predetermined threshold that could potentially cause irreversible damage. For example, the predetermined threshold may be the superconducting transition point of the magnet coil32material. In another example, the predetermined threshold may be a higher temperature than the superconducting transition point, for example, 20 K. In an embodiment, the temperature monitor44may be used to measure a temperature of the magnet coils32in real-time. The temperature of the magnet coils32may be monitored and the temperature may be provided to the controller26to control the rapid shutdown of the magnetic field14. After the magnetic field14has been shut down (or turned off), the condition(s) that led to the need for the rapid shutdown may be resolved. Once the rapid shutdown condition has been resolved, the energy stored in the energy storage device46from the shutdown of the magnetic field14may be used to fully, or partially, recharge the magnet coils32. The controller26may be configured to recharge the magnet coils32using the energy stored in the energy storage device46from the shutdown of the magnetic field14in response to instructions from a user. For example, the energy storage device46and the superconducting switch50may operate under control from the controller26to provide the energy stored in the energy storage device46to the magnet coils32when the energy storage device46is in a connected circuit with the magnet coils32. FIG.2illustrates a method for rapid shutdown and recharge of a superconducting magnet in accordance with an embodiment. At block202, energy from a set of magnet coils in a magnet assembly of an MRI system is dissipated to an energy storage device coupled to the magnet coils. The energy is dissipated based on a rapid shutdown condition, for example, the presence of a large metallic object that is attracted by the strong magnetic field of the MRI system. In an embodiment, a user may provide instructions to the MRI system to rapidly shut down the magnetic field of the magnet assembly. In one example, a superconducting switch may be used to connect the energy storage device to the magnet coils. 
During the shutdown of the magnetic field, current density is removed from the magnet coils and the energy is dissipated to the energy storage device. In an embodiment, the magnetic field may be turned off in a short amount of time, for example, in an amount of time comparable to the typical amount of time a traditional “quench” would take (e.g., less than 10 seconds). As mentioned above, the rate of energy exchange (and thus the rate of magnetic field change) can be controlled so that the temperature of the conductor does not exceed a predetermined threshold that could potentially cause irreversible damage. In an embodiment, a temperature monitor may be used to measure a temperature of the magnet coils in real-time. The temperature of the magnet coils may be monitored and the temperature may be provided to a controller of the MRI system to control the rapid shutdown of the magnetic field. At block204, the energy dissipated from the magnet coils is stored in the energy storage device. The energy storage device may be, for example, an inductive load or a battery. After the magnetic field has been turned off, the status of the rapid shutdown condition is determined at block206. If the rapid shutdown condition has not been resolved at block208, the magnetic field will remain turned off until the issue is resolved. If the rapid shutdown condition has been resolved at block208, the magnet coils of the magnet assembly may be recharged using the energy stored in the energy storage device at block210. In an embodiment, a user may provide instructions to the MRI system to recharge the magnet coils of the magnet assembly. In another embodiment, a rapid shutdown (e.g., an emergency shutdown) of the magnet coils32may be performed using a resistive load coupled to the magnet coils32.FIG.3is a block diagram of an MRI system capable of rapid shutdown of a superconducting magnet. The elements and operation of MRI system10shown inFIG.3are similar to the MRI system described above with respect toFIG.1. InFIG.3, the MRI system10includes a resistive load48coupled to magnet coils32of a magnet assembly12. In an embodiment, the resistive load48has a large thermal mass. The resistive load48may be coupled to the magnet coils32using a superconducting switch52. The superconducting switch52may be controlled using, for example, controller26to selectively connect the resistive load48and the magnet coils32into a connected circuit. In an embodiment, the superconducting switch52may be any suitable superconducting switch that can be used for selectively connecting the magnet coils32and resistive load48into a connected circuit. For example, the superconducting switch52may be switched between an open state and a closed state as described in the non-limiting example mentioned above. Energy from the magnet coils32may be dissipated to the resistive load48during rapid shutdown of the magnetic field14. In an embodiment, the magnetic field14may be turned off in a short amount of time, for example, in an amount of time comparable to the typical amount of time a traditional “quench” would take (e.g., less than 10 seconds). As mentioned above, the rate of energy exchange (and thus the rate of magnetic field change) can be controlled so that the temperature of the conductor (magnet coils32) does not exceed a predetermined threshold that could potentially cause irreversible damage. In an embodiment, a temperature monitor44may be used to measure a temperature of the magnet coils32in real-time. 
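The method ofFIG.2described above reduces to a small state machine: dissipate (block202), store (block204), poll the shutdown condition (blocks206and208), and recharge (block210). A schematic Python rendering follows; the object interfaces are hypothetical stand-ins for the superconducting switch, the magnet, and the energy storage device, not the disclosed implementation.

```python
# Schematic rendering of the FIG. 2 flow; interfaces are hypothetical.
def rapid_shutdown_and_recharge(magnet, storage, switch, condition):
    # Block 202: connect the energy storage device and dissipate coil energy.
    switch.close()
    magnet.dissipate_into(storage)   # current density removed, field off

    # Block 204: energy is now held in the storage device (battery/inductor).
    switch.open()

    # Blocks 206/208: the field stays off until the condition is resolved.
    while not condition.resolved():
        condition.wait()

    # Block 210: recharge the magnet coils from the stored energy.
    switch.close()
    storage.discharge_into(magnet)
    switch.open()
```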
The temperature of the magnet coils32may be monitored and the temperature may be provided to a controller26to control the rapid shutdown of the magnetic field14. The controller26may be configured to rapidly shut down (or turn off) the magnetic field14of the magnet assembly12in response to instructions from a user. The user may provide instructions to the controller based on the presence of a shutdown condition. In yet another embodiment, a resistive load may be used in combination with an energy storage device to rapidly shut down and recharge the magnet coils as shown inFIG.4. The elements and operation of MRI system10shown inFIG.4are similar to the MRI system described above with respect toFIG.1. InFIG.4, the MRI system includes a resistive load48coupled to magnet coils32of a magnet assembly12, and the resistive load48is also coupled to an energy storage device46. The energy storage device46is coupled to a controller26. In one embodiment, the energy storage device may be an inductive load. For example, the inductive load may be a second superconducting system. The second superconducting system may be thermally coupled to the cryocooler36of MRI system10and cooled by the cryocooler36. In another embodiment, the energy storage device46may be a battery. The energy storage device46may be coupled to the magnet coils32using a superconducting switch50and the resistive load48may be coupled to the magnet coils32using a superconducting switch52. The superconducting switches50,52may be controlled using, for example, controller26to selectively connect the energy storage device46and the resistive load48, respectively, and the magnet coils32into a connected circuit. In an embodiment, the superconducting switches50,52may be any suitable superconducting switches that can be used for selectively connecting the magnet coils32and resistive load48into a connected circuit. For example, the superconducting switches50,52may be switched between an open state and a closed state as described in the non-limiting example mentioned above. Energy from the magnet coils32may be dissipated to the resistive load48during rapid shutdown of the magnetic field14. The controller26may be configured to rapidly shut down (or turn off) the magnetic field14of the magnet assembly12in response to instructions from a user. The user may provide instructions to the controller26based on the presence of a shutdown condition. Thermal energy (or heat) dissipated by the resistive load48may be used to charge the energy storage device46. As mentioned above, the rate of energy exchange (and thus the rate of magnetic field change) can be controlled so that the temperature of the conductor (magnet coils32) does not exceed a predetermined threshold that could potentially cause irreversible damage. In an embodiment, a temperature monitor44may be used to measure a temperature of the magnet coils32in real-time. The temperature of the magnet coils32may be monitored and the temperature may be provided to a controller26to control the rapid shutdown of the magnetic field14. After the magnetic field14has been shut down (or turned off), the condition(s) that led to the need for the rapid shutdown may be resolved. Once the rapid shutdown condition has been resolved, the energy stored in the energy storage device46from the resistive load48may be used to fully, or partially, recharge the magnet coils32. The controller26may be configured to recharge the magnet coils32using the energy stored in the energy storage device46in response to instructions from a user. 
For example, the energy storage device46and the superconducting switch50may operate under control from the controller26to provide the energy stored in the energy storage device46to the magnet coils32when the energy storage device46is in a connected circuit with the magnet coils32. The present invention has been described in terms of one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.
28,492
11860255
DETAILED DESCRIPTION Conventional MRI systems typically consume large amounts of power during their operation. For example, common 1.5 T and 3 T MRI systems typically consume between 20-40 kW of power during operation, while available 0.5 T and 0.2 T MRI systems commonly consume between 5-20 kW, each using dedicated and specialized power sources. Unless otherwise specified, power consumption is referenced as average power consumed over an interval of interest. For example, the 20-40 kW referred to above indicates the average power consumed by conventional MRI systems during the course of image acquisition, which may include relatively short periods of peak power consumption that significantly exceeds the average power consumption (e.g., when the gradient coils and/or radio frequency (RF) coils are pulsed over relatively short periods of the pulse sequence). As discussed above, available clinical MRI systems must have dedicated power sources, typically requiring a dedicated three-phase connection to the electrical grid to power the components of the MRI system in order to satisfy the peak and average power consumption during operation of the MRI system. This requirement severely limits the ability to deploy conventional clinical MRI systems in environments where such power cannot be readily supplied, restricting the clinical applications and locations where MRI can be utilized. The inventors have recognized and appreciated that portable and/or low-field MRI systems utilizing power supplied through single-phase mains electricity also demand high peak power consumption for short periods of time during operation (e.g., to produce some gradient fields and/or RF pulses during a pulse sequence). For example, in some embodiments, while average power consumption of the MRI system may be below approximately 1500 W, for the production of some gradient fields, the MRI system may use between 2000 and 3000 W, and up to 4000 W, for a period of 100 ms. This peak power consumption may be repeated every second or two throughout operation of the MRI system. Such peak power may exceed the power that is available to the MRI system solely from mains electricity. Alternatively, when such peak power can be supplied from mains electricity, the MRI system's consumption of short bursts of high peak power could detrimentally affect the electrical system supplying the power to the MRI system. For example, if the MRI system pulls too much peak power over a short period of time, a breaker at the medical facility could be tripped during operation of the MRI system, causing undesirable loss of power at the medical facility. The inventors have recognized and appreciated that an additional energy storage device can supplement available power provided to the MRI system by mains electricity during load peaks. Additionally, the inventors have recognized that such an energy storage device can provide load-leveling to the MRI system by absorbing excess power from the MRI system's mains-connected power supply (PSU) during load dips. In this manner, the MRI system can be operated without affecting the supply of mains electricity during load peaks and dips. Accordingly, the inventors have developed systems and methods for supplying power to an MRI system from a power supply configured to receive mains electricity and supplying supplemental power to the MRI system from an energy storage device. 
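The load-leveling idea can be stated compactly: when instantaneous demand exceeds what mains electricity can deliver, the energy storage device discharges the difference, and when demand dips below the mains draw, the headroom recharges the device. The Python sketch below is a simplified illustration only; the greedy recharge policy and the interface are assumptions, not the disclosed design.

```python
def split_power(demand_w: float, mains_limit_w: float) -> tuple[float, float]:
    """Return (mains_w, storage_w) for one instant of operation.
    storage_w > 0 means the energy storage device discharges (load peak);
    storage_w < 0 means it absorbs excess from the power supply (load dip)."""
    mains_w = min(demand_w, mains_limit_w)
    storage_w = demand_w - mains_w          # discharge during peaks
    if demand_w < mains_limit_w:
        # Headroom available: greedily use all of it to recharge the device.
        storage_w = demand_w - mains_limit_w  # negative -> charging
        mains_w = mains_limit_w
    return mains_w, storage_w

# Example: a 3000 W gradient pulse against a 1500 W mains budget draws
# 1500 W from mains and 1500 W from the energy storage device.
assert split_power(3000.0, 1500.0) == (1500.0, 1500.0)
```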
In some embodiments, the MRI system is configured to operate in accordance with a pulse sequence having multiple periods and includes a magnetics system, a power system, and at least one controller. The magnetics system includes a B0magnet to generate at least part of (e.g., less than all of or all of) the main B0magnetic field and a gradient coil to generate at least one gradient magnetic field to provide spatial encoding of magnetic resonance (MR) signals from the subject (e.g., along the x-, y-, and/or z-axes). The power system is configured to provide power to at least some of the components of the magnetics system and includes an energy storage device and a power supply. A controller is configured to control the MRI system to operate in accordance with the pulse sequence at least in part by generating, by using power supplied by the power supply and supplemental power supplied by the energy storage device, at least one gradient field using the at least one gradient coil. The energy storage device may be, for example, one or more batteries of any suitable chemistry, one or more capacitors, one or more supercapacitors, one or more ultracapacitors, one or more flywheels, one or more compressed fluid devices, and/or one or more pumped storage devices. It should be appreciated that the energy storage device may include a single type of energy storage device (e.g., only batteries, only capacitors, only supercapacitors, etc.) or may include any suitable combination of the above-described devices, as aspects of the technology described herein are not so limited. The power system also includes a power supply configured to receive mains electricity. Mains electricity is electricity typically provided at standard wall outlets. Mains electricity may be single-phase electricity or may be multi-phase electricity (e.g., three-phase electricity). For example, in the United States, mains electricity may be provided at a voltage of 120 V or 240 V and rated at 15, 20, or 30 amperes. Globally, mains electricity may be provided at a voltage between 100 V and 130 V (e.g., at 100 V, 110 V, 115 V, 120 V, or 127 V) or between 200 V and 240 V (e.g., at 220 V, 230 V, or 240 V) and rated at an amperage between 2.5 and 32 A. The power supply is further configured to provide power to the MRI system using the received mains electricity. For example, the power supply may be an AC-to-DC power supply, in some embodiments. In some embodiments, the MRI system may be operated using power supplied by the power supply and supplemental power supplied by the energy storage device. The power supplied by the power supply and the supplemental power supplied by the energy storage device may be jointly (e.g., concurrently, at the same time) supplied to the MRI system. For example, the energy storage device and the power supply may concurrently supply power to the MRI system for periods of time within a pulse sequence (e.g., during particular gradient and/or radio frequency pulse application periods of time). Alternatively, the power supplied by the power supply and the supplemental power supplied by the energy storage device may be nonconcurrently supplied to the MRI system (e.g., at separate times). 
For example, the energy storage device and the power supply may nonconcurrently supply power to the MRI system for different periods of time within a pulse sequence (e.g., for different gradient and/or radio frequency pulse application periods of time, for different portions of gradient and/or radio frequency pulse application periods of time). It should be appreciated that the supplemental power supplied by the energy storage device may provide a minority of the power used by the MRI system, approximately half of the power used by the MRI system, a majority of the power used by the MRI system, and/or all of the power used by the MRI system, as aspects of the technology described herein are not limited in this respect. In some embodiments, the energy storage device may be electrically coupled to the MRI system (e.g., the magnetic components, other electronic components, etc.) using a unidirectional DC-to-DC power converter. In some embodiments, the energy storage device may be electrically coupled to the MRI system using a bidirectional DC-to-DC power converter. For example, the bidirectional DC-to-DC power converter may be arranged as a synchronous buck DC-to-DC power converter, a synchronous boost DC-to-DC power converter, or a four-switch buck-boost DC-to-DC power converter. In some embodiments, the power supply may be configured to provide power to the energy storage device and the MRI system concurrently. For example, the power supply may be configured to charge the energy storage device while also powering the MRI system. In some embodiments, the energy storage device may be charged by the power supply during operation of the MRI system (e.g., during a pulse sequence) or while the MRI system is in an idle state. In some embodiments, the energy storage device and the power supply may both be physically coupled to the MRI system. In some embodiments, the energy storage device and the power supply may both be disposed on-board the MRI system. For example, in the instance of a portable MRI system that may be moved between locations, the energy storage device and the power supply may be disposed in such a way that both move with the MRI system between locations. In some embodiments, the energy storage device and the power supply may be configured to jointly provide power to the MRI system when the MRI system is operating in accordance with a particular pulse sequence. For example, the energy storage device and the power supply may be configured to jointly provide power to the MRI system when the MRI system is operating in accordance with a diffusion-weighted imaging (DWI) pulse sequence. Alternatively, the MRI system may be operated in accordance with any one of a non-limiting selection of a steady-state free precession (SSFP) pulse sequence, a balanced SSFP pulse sequence, a fluid-attenuated inversion recovery (FLAIR) pulse sequence, and/or a fast spin echo pulse sequence. In some embodiments, the power supply may be configured to provide power to the MRI system and the energy storage device may be configured to provide supplemental power to the MRI system at least once per period of the pulse sequence (e.g., for a single pulse or multiple pulses during the pulse sequence). For example, the power supply may be configured to provide power to the MRI system and the energy storage device may be configured to provide supplemental power to the MRI system to power a gradient coil in order to generate at least one gradient field at least once per period of the pulse sequence. 
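Returning briefly to the DC-to-DC converter arrangements named above: their ideal, continuous-conduction conversion ratios are standard power-electronics results rather than anything specific to this disclosure. With duty cycle D, a synchronous buck steps the storage voltage down, a synchronous boost steps it up, and a four-switch buck-boost spans both regimes:

$$V_{\text{out}}^{\text{buck}} = D\,V_{\text{in}}, \qquad V_{\text{out}}^{\text{boost}} = \frac{V_{\text{in}}}{1-D}, \qquad 0 \le D < 1.$$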
In some embodiments, the power supply may be configured to provide power to the MRI system and the energy storage device may be configured to provide supplemental power to the MRI system during diffusion gradient pulses of a DWI pulse sequence. In some embodiments, the energy storage device and the power supply may be configured to provide a peak power to the MRI system that is greater than or equal to an average power used by the MRI system. For example, the energy storage device and the power supply may be configured to jointly provide a peak power that is greater than or equal to 1500 W. In some embodiments, the energy storage device and the power supply may be configured to jointly provide a peak power that is less than or equal to 4000 W. In some embodiments, the energy storage device and the power supply may be configured to jointly provide a peak power that is greater than or equal to 1500 W and less than or equal to 3500 W, greater than or equal to 1500 W and less than or equal to 3000 W, or greater than or equal to 2000 W and less than or equal to 4000 W. It may be appreciated that the energy storage device and the power supply may be configured to provide any suitable peak power or range of peak powers within the aforementioned range. In some embodiments, the energy storage device and the power supply may be configured to provide a peak power for a length of time that is greater than or equal to 1 ms and less than or equal to 200 ms, greater than or equal to 1 ms and less than or equal to 150 ms, greater than or equal to 5 ms and less than or equal to 150 ms, or greater than or equal to 10 ms and less than or equal to 100 ms. In some embodiments, the MRI system may also include a conveyance mechanism allowing the MRI system to be transported to different locations. For example, the conveyance mechanism may be a motorized drive system, in some embodiments. The MRI system may also include a transfer switch configured to couple the energy storage device to the mobile MRI drive system or to the magnetics system of the MRI system. In this way, the energy storage device may be used to power the conveyance mechanism while the MRI system is moving between locations and not connected to mains electricity (e.g., via a wall outlet). In some embodiments, the conveyance mechanism may include at least one motorized component. In some embodiments, the conveyance mechanism may include at least one wheel. For example, the at least one wheel may be at least one motorized wheel. In some embodiments, the at least one B0magnet is configured to generate a B0magnetic field having a field strength of less than or equal to 0.2 T. In some embodiments, the at least one B0magnet is configured to generate a B0magnetic field having a field strength of less than or equal to 0.2 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 10 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 20 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 0.05 mT, a field strength of less than or equal to 0.2 T or greater than or equal to 20 mT, or field strength within any suitable range within these ranges. 
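Tying together the power figures quoted in this description (a roughly 3000 W gradient load sustained for about 100 ms against an average budget of approximately 1500 W), the supplemental energy the storage device must deliver per peak is modest:

$$E_{\text{supp}} = (P_{\text{peak}} - P_{\text{mains}})\,\Delta t = (3000~\mathrm{W} - 1500~\mathrm{W}) \times 0.1~\mathrm{s} = 150~\mathrm{J}.$$

With such a peak repeating every second or two, even a small battery or supercapacitor bank can sustain this indefinitely, recharging from mains headroom between pulses.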
It should be appreciated that while the techniques described herein are described primarily in connection with an MRI system, they could be employed in other similar medical imaging devices requiring large peak power during operation, such as X-ray scanners and/or CT scanners. Following below are more detailed descriptions of various concepts related to, and embodiments of, techniques for load-leveling of a medical imaging system. It should be appreciated that various aspects described herein may be implemented in any of numerous ways. Examples of specific implementations are provided herein for illustrative purposes only. In addition, the various aspects described in the embodiments below may be used alone or in any combination and are not limited to the combinations described explicitly herein. As used herein, “high-field” refers generally to MRI systems presently in use in a clinical setting and, more particularly, to MRI systems operating with a main magnetic field (i.e., a B0field) at or above 1.5 T, though clinical systems operating between 0.5 T and 1.5 T are often also characterized as “high-field.” Field strengths between 0.2 T and 0.5 T have been characterized as “mid-field” and, as field strengths in the high-field regime have continued to increase, field strengths in the range between 0.5 T and 1 T have also been characterized as mid-field. By contrast, “low-field” refers generally to MRI systems operating with a B0field of less than or equal to 0.2 T. For example, a low-field MRI system may operate with a B0field having a field strength of less than or equal to 0.2 T and greater than or equal to 50 mT, having a field strength of less than or equal to 0.1 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 10 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 20 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 0.05 mT, a field strength of less than or equal to 0.2 T or greater than or equal to 20 mT, or field strength within any suitable range within these ranges. FIG.1illustrates exemplary components of a magnetic resonance imaging (MRI) system, in accordance with some embodiments. In the illustrative example ofFIG.1, MRI system100comprises computing device104, controller106, pulse sequences repository108, power management system110, and magnetics components120. It should be appreciated that system100is illustrative and that an MRI system may have one or more other components of any suitable type in addition to or instead of the components illustrated inFIG.1. However, an MRI system will generally include these high-level components, though the implementation of these components for a particular MRI system may differ. It may be appreciated that the techniques described herein may be used with any suitable type of MRI system, including high-field MRI systems, low-field MRI systems, and ultra-low field MRI systems. For example, the techniques described herein may be used with any of the MRI systems described herein and/or as described in U.S. Pat. No. 10,627,464 filed Jun. 30, 2017 and titled “Low-Field Magnetic Resonance Imaging Methods and Apparatus,” which is incorporated by reference herein in its entirety. As illustrated inFIG.1, magnetics components120comprise B0magnets122, shim coils124, RF transmit and receive coils126, and gradient coils128. B0magnets122may be used to generate the main magnetic field B0. 
FIG. 1 illustrates exemplary components of a magnetic resonance imaging (MRI) system, in accordance with some embodiments. In the illustrative example of FIG. 1, MRI system 100 comprises computing device 104, controller 106, pulse sequences repository 108, power management system 110, and magnetics components 120. It should be appreciated that system 100 is illustrative and that an MRI system may have one or more other components of any suitable type in addition to or instead of the components illustrated in FIG. 1. However, an MRI system will generally include these high-level components, though the implementation of these components for a particular MRI system may differ. It may be appreciated that the load-leveling techniques described herein may be used with any suitable type of MRI system, including high-field MRI systems, low-field MRI systems, and ultra-low-field MRI systems. For example, the techniques described herein may be used with any of the MRI systems described herein and/or as described in U.S. Pat. No. 10,627,464, filed Jun. 30, 2017 and titled “Low-Field Magnetic Resonance Imaging Methods and Apparatus,” which is incorporated by reference herein in its entirety. As illustrated in FIG. 1, magnetics components 120 comprise B0 magnets 122, shim coils 124, RF transmit and receive coils 126, and gradient coils 128. B0 magnets 122 may be used to generate the main magnetic field B0. B0 magnets 122 may be any suitable type or combination of magnetics components that can generate a desired main magnetic B0 field. In some embodiments, B0 magnets 122 may be one or more permanent magnets, one or more electromagnets, one or more superconducting magnets, or a hybrid magnet comprising one or more permanent magnets and one or more electromagnets and/or one or more superconducting magnets. In some embodiments, B0 magnets 122 may be configured to generate a B0 magnetic field having a field strength that is less than or equal to 0.2 T, a field strength of less than or equal to 0.2 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 10 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 20 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 0.05 mT, a field strength of less than or equal to 0.2 T or greater than or equal to 20 mT, or a field strength within any suitable range within these ranges. For example, in some embodiments, B0 magnets 122 may include a first and second B0 magnet, each of the first and second B0 magnet including permanent magnet blocks arranged in concentric rings about a common center. The first and second B0 magnet may be arranged in a bi-planar configuration such that the imaging region is located between the first and second B0 magnets. In some embodiments, the first and second B0 magnets may each be coupled to and supported by a ferromagnetic yoke configured to capture and direct magnetic flux from the first and second B0 magnets. As an example, B0 magnets 122 may include an upper magnet 810a and a lower magnet 810b as described in the embodiment shown in FIGS. 8A and 8B herein. Each magnet 810a, 810b includes permanent magnet blocks arranged in concentric rings about a common center, and the upper magnet 810a and lower magnet 810b are arranged in a bi-planar configuration and supported by ferromagnetic yoke 820. Additional details of such embodiments are described in U.S. Pat. No. 10,545,207, titled “Low-Field Magnetic Resonance Imaging Methods and Apparatus” and filed on Apr. 18, 2018, which is incorporated by reference herein in its entirety. Gradient coils 128 may be arranged to provide gradient fields and, for example, may be arranged to generate gradients in the B0 field in three substantially orthogonal directions (X, Y, Z). Gradient coils 128 may be configured to encode emitted MR signals by systematically varying the B0 field (the B0 field generated by B0 magnets 122 and/or shim coils 124) to encode the spatial location of received MR signals as a function of frequency or phase. For example, gradient coils 128 may be configured to vary frequency or phase as a linear function of spatial location along a particular direction, although more complex spatial encoding profiles may also be provided by using nonlinear gradient coils. In some embodiments, gradient coils 128 may be implemented using laminate panels (e.g., printed circuit boards). Examples of such gradient coils are described in U.S. Pat. No. 9,817,093, titled “Low Field Magnetic Resonance Imaging Methods and Apparatus” and filed on Sep. 4, 2015, which is incorporated by reference herein in its entirety.
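The linear encoding just described can be made concrete with the Larmor relation: with a gradient G applied along x, the resonance frequency becomes f(x) = (γ/2π)(B0 + G·x). The sketch below is illustrative only; the 64 mT field and 10 mT/m gradient are hypothetical values, and the constant is the standard gyromagnetic ratio of the 1H nucleus divided by 2π.

```python
GAMMA_BAR_HZ_PER_T = 42.577e6  # (gamma / 2*pi) for the 1H nucleus, in Hz/T

def larmor_frequency_hz(b0_t: float, gradient_t_per_m: float, x_m: float) -> float:
    """Resonance frequency at position x under a linear gradient along x."""
    return GAMMA_BAR_HZ_PER_T * (b0_t + gradient_t_per_m * x_m)

# Hypothetical 64 mT system with a 10 mT/m gradient: the frequency offset
# 10 cm from isocenter encodes that location in the received signal.
offset = (larmor_frequency_hz(0.064, 0.010, 0.10)
          - larmor_frequency_hz(0.064, 0.010, 0.0))
print(f"{offset:.0f} Hz")  # ~42577 Hz
```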
MRI is performed by exciting and detecting emitted MR signals using transmit and receive coils, respectively (often referred to as radio frequency (RF) coils). Transmit/receive coils may include separate coils for transmitting and receiving, multiple coils for transmitting and/or receiving, or the same coils for transmitting and receiving. Thus, a transmit/receive component may include one or more coils for transmitting, one or more coils for receiving, and/or one or more coils for transmitting and receiving. Transmit/receive coils are also often referred to as Tx/Rx or Tx/Rx coils to generically refer to the various configurations for the transmit and receive magnetics component of an MRI system. These terms are used interchangeably herein. In FIG. 1, RF transmit and receive circuitry 116 comprises one or more transmit coils that may be used to generate RF pulses to induce an oscillating magnetic field B1. The transmit coil(s) may be configured to generate any suitable types of RF pulses. The transmit and receive circuitry 116 may include additional electronic components of the transmit and receive chains, as described in U.S. Patent Application Publication No. 2019/0353723, titled “Radio-Frequency Coil Signal Chain for a Low-Field MRI System” and filed on May 21, 2019, which is hereby incorporated by reference in its entirety. Power management system 110 includes electronics to provide operating power to one or more components of the low-field MRI system 100. For example, power management system 110 may include one or more power supplies, energy storage devices, gradient power components, transmit coil components, and/or any other suitable power electronics needed to provide suitable operating power to energize and operate components of MRI system 100. As illustrated in FIG. 1, power management system 110 comprises power supply system 112, power component(s) 114, transmit/receive switch 116, and thermal management components 118 (e.g., cryogenic cooling equipment for superconducting magnets). Power supply system 112 includes electronics to provide operating power to magnetics components 120 of the MRI system 100. The electronics of power supply system 112 may provide, for example, operating power to one or more gradient coils (e.g., gradient coils 128) to generate one or more gradient magnetic fields to provide spatial encoding of the MR signals. For example, power supply system 112 may include a power supply 112a configured to provide power from mains electricity to the MRI system and an energy storage device 112b, as described in more detail in connection with FIGS. 2A and 2B. The power supply 112a may, in some embodiments, be an AC-to-DC power supply configured to convert AC power from mains electricity into DC power for use by the MRI system. The energy storage device 112b may, in some embodiments, be any one of a battery, a capacitor, a supercapacitor, an ultracapacitor, a flywheel, or any other suitable energy storage apparatus that may bidirectionally receive (e.g., store) power from mains electricity and supply power to the MRI system. Additionally, power supply system 112 may include power electronics 112c encompassing components including, but not limited to, power converters, switches, buses, drivers, and any other suitable electronics for supplying the MRI system with power. In some embodiments, the power supply system 112 may be configured to receive operating power from mains electricity via a power connection to, for example, a standard wall outlet (e.g., 120 V/20 A connections in the United States, 100-130 V/200-240 V connections internationally) or common large appliance outlets (e.g., 220-240 V/30 A), allowing the device to be operated anywhere common power outlets are provided.
For example, mains electrical power in the United States and most of North America is provided at 120 V and 60 Hz and rated at 15 or 20 amps, permitting utilization for devices operating below 1800 W and 2400 W, respectively. Many facilities also have 220-240 VAC outlets with 30 amp ratings, permitting devices operating up to 7200 W to be powered from such outlets. The ability to “plug into the wall” facilitates both portable/transportable MRI as well as fixed MRI system installations without requiring special, dedicated power such as a three-phase power connection. Amplifier(s) 114 may include one or more RF receive (Rx) pre-amplifiers that amplify MR signals detected by one or more RF receive coils (e.g., coils 126), one or more RF transmit (Tx) power components configured to provide power to one or more RF transmit coils (e.g., coils 126), one or more gradient power components configured to provide power to one or more gradient coils (e.g., gradient coils 128), and one or more shim power components configured to provide power to one or more shim coils (e.g., shim coils 124). Transmit/receive switch 116 may be used to select whether RF transmit coils or RF receive coils are being operated. As illustrated in FIG. 1, MRI system 100 includes controller 106 (also referred to as a console) having control electronics to send instructions to and receive information from power management system 110. Controller 106 may be configured to implement one or more pulse sequences, which are used to determine the instructions sent to power management system 110 to operate the magnetics components 120 in a desired sequence (e.g., parameters for operating the RF transmit and receive coils 126, parameters for operating gradient coils 128, etc.). As illustrated in FIG. 1, controller 106 also interacts with computing device 104 programmed to process received MR data. For example, computing device 104 may process received MR data to generate one or more MR images using any suitable image reconstruction process(es). Controller 106 may provide information about one or more pulse sequences to computing device 104 for the processing of data by the computing device. For example, controller 106 may provide information about one or more pulse sequences to computing device 104 and the computing device may perform an image reconstruction process based, at least in part, on the provided information. Computing device 104 may be any electronic device configured to process acquired MR data and generate one or more images of a subject being imaged. In some embodiments, computing device 104 may be located in a same room as the MRI system 100 and/or coupled to the MRI system 100. In some embodiments, computing device 104 may be a fixed electronic device such as a desktop computer, a server, a rack-mounted computer, or any other suitable fixed electronic device that may be configured to process MR data and generate one or more images of the subject being imaged. Alternatively, computing device 104 may be a portable device such as a smart phone, a personal digital assistant, a laptop computer, a tablet computer, or any other portable device that may be configured to process MR data and generate one or more images of the subject being imaged. In some embodiments, computing device 104 may comprise multiple computing devices of any suitable type, as aspects of the disclosure provided herein are not limited in this respect. FIG. 2A illustrates a block diagram of an exemplary power system 200a for an MRI system, in accordance with some embodiments.
Power system 200a may be included in power supply system 112 as described in connection with FIG. 1. Power system 200a includes an energy storage device 202 and an AC-to-DC power supply 206 electrically coupled to the MRI system electronics 210 (e.g., to one or more components of magnetics system 120 (e.g., B0 magnet(s) 122, shim coils 124, RF transmit and receive coils 126, and/or gradient coils 128) and/or to any other MRI system electronics to be powered during operation) through DC bus 204. The AC-to-DC power supply 206 receives mains electricity (e.g., single-phase electricity) from AC mains 208 (e.g., a wall outlet). In some embodiments, the energy storage device 202 may comprise a physical system configured to store energy and exchange energy in both directions with an electrical circuit. The energy storage device 202 may include, for example, one or more batteries of any suitable chemistry. For example, the energy storage device 202 may be lead-acid batteries, nickel-cadmium batteries, nickel-metal hydride batteries, and/or lithium ion batteries. Alternatively or additionally, the energy storage device 202 may include one or more capacitors, supercapacitors, or ultracapacitors (e.g., comprising a capacitance greater than or equal to 0.5 F). Alternatively or additionally, the energy storage device 202 may include any other suitable energy storage mechanism, including but not limited to a flywheel, compressed fluids, and/or pumped storage. In some embodiments, the AC-to-DC power supply 206 may convert AC mains electricity to DC power to supply the MRI system electronics 210 with DC power through the DC bus 204. The AC-to-DC power supply 206 may comprise a transformer and a rectifier. The AC-to-DC power supply 206 may include any other suitable components. For example, the AC-to-DC power supply 206 may include additional filtering components based on requirements of the MRI system electronics 210 to filter AC noise out of the DC signal. In some embodiments, the AC-to-DC power supply 206 may be configured to provide power to MRI system electronics 210 and the energy storage device 202 may be configured to provide supplemental power to MRI system electronics 210 during operation of the MRI system. For example, the AC-to-DC power supply 206 may be configured to provide power to MRI system electronics 210 and the energy storage device 202 may be configured to provide supplemental power to MRI system electronics 210 when the MRI system electronics 210 are operated in accordance with a pulse sequence having multiple periods in order to acquire a magnetic resonance (MR) image. The AC-to-DC power supply 206 may be configured to provide power to MRI system electronics 210 and the energy storage device 202 may be configured to provide supplemental power to MRI system electronics 210 for portions of or the entirety of a period of the pulse sequence. In some embodiments, the AC-to-DC power supply 206 may be configured to provide power to MRI system electronics 210 and the energy storage device 202 may be configured to provide supplemental power to MRI system electronics 210 during operation of the MRI system in accordance with a diffusion-weighted imaging (DWI) pulse sequence. DWI pulse sequences use strong diffusion gradient fields to sensitize diffusing spins (e.g., in the blood stream, in cerebrospinal fluid, in tumors, etc.) and to generate MR images based on the diffusion of the sensitized spins.
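The load-leveling behavior described above can be illustrated with a toy simulation: the AC-to-DC supply covers demand up to its limit, the storage device discharges to cover any excess during gradient-intensive portions of the sequence, and it recharges between them. All numbers and names below are hypothetical and chosen only to show the mechanism.

```python
def simulate_stored_energy(demand_w, mains_limit_w=1800.0,
                           recharge_limit_w=300.0, stored_j=1500.0,
                           dt_s=0.01):
    """Return the stored-energy trace for a given power-demand profile."""
    trace = []
    for p in demand_w:
        if p > mains_limit_w:
            stored_j -= (p - mains_limit_w) * dt_s   # storage assists
        else:
            stored_j += min(mains_limit_w - p, recharge_limit_w) * dt_s
        trace.append(stored_j)
    return trace

# 100 ms diffusion-gradient burst at 3000 W between 500 W idle periods.
profile = [500.0] * 50 + [3000.0] * 10 + [500.0] * 50
print(f"minimum stored energy: {min(simulate_stored_energy(profile)):.0f} J")
```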
In some embodiments, the AC-to-DC power supply 206 may be configured to provide power to MRI system electronics 210 and the energy storage device 202 may be configured to provide supplemental power to MRI system electronics 210 when diffusion gradient fields are being generated by the gradient coils of the MRI system during a DWI pulse sequence. In some embodiments, the energy storage device 202 and the AC-to-DC power supply 206 may be configured to provide a peak power having an amplitude greater than an average power consumption of the MRI system. For example, the energy storage device 202 and the AC-to-DC power supply 206 may be configured to provide a peak power having an amplitude greater than or equal to 1500 W. In some embodiments, the energy storage device 202 and the AC-to-DC power supply 206 may be configured to provide a peak power having an amplitude less than or equal to 4000 W. In some embodiments, the energy storage device and the power supply may be configured to provide a peak power that is greater than or equal to 1500 W and less than or equal to 3500 W, greater than or equal to 1500 W and less than or equal to 3000 W, or greater than or equal to 2000 W and less than or equal to 4000 W. It may be appreciated that in some embodiments, the energy storage device 202 and the AC-to-DC power supply 206 may be configured to provide any suitable peak power having an amplitude within the above-specified range or a range of peak powers within that range. In some embodiments, the energy storage device 202 and the AC-to-DC power supply 206 may be configured to function as an uninterruptible power supply (UPS). For example, in the event that the AC mains electricity is interrupted (e.g., a blackout or a brownout), the energy storage device 202 may be configured to supply additional power to the MRI system electronics 210 in order to maintain a steady power supply to the MRI system electronics 210. In particular, such a configuration would be useful in settings where the electrical infrastructure is unreliable (e.g., field hospitals, the developing world). In some embodiments, the energy storage device 202 and the AC-to-DC power supply 206 may both be physically coupled to the MRI system. For example, the energy storage device 202 and the AC-to-DC power supply 206 may both be “on-board” the MRI system such that if the MRI system is moved between locations, both the energy storage device 202 and the AC-to-DC power supply 206 are moved with the MRI system. Additional configurations of energy storage device 202 and AC-to-DC power supply 206 are presented herein. FIG. 2B illustrates a block diagram of an exemplary power system 200b for an MRI system, in accordance with some embodiments. The energy storage device 202 may be coupled to the DC bus 204 through a bidirectional DC-to-DC power converter 203. In some embodiments, the bidirectional DC-to-DC power converter 203 may comprise a synchronous buck DC-to-DC power converter, a synchronous boost DC-to-DC power converter, or a four-switch buck-boost DC-to-DC power converter, as described in more detail in connection with FIGS. 3A-3C. In some embodiments, the bidirectional DC-to-DC power converter 203 may comprise a switch-mode power supply (SMPS). In some embodiments, the bidirectional DC-to-DC power converter 203 may switch between buck and boost modes based on the voltage of the DC bus 204. For example, at the start of a pulse sequence and when a load on the AC-to-DC power supply 206 is light, the DC bus 204 may maintain its nominal output voltage, VBUS.
At this stage, the bidirectional DC-to-DC power converter 203 operates in buck mode and acts as a float charger for the energy storage device 202. As the pulse sequence progresses and the load on the AC-to-DC power supply 206 exceeds the current limit of the AC-to-DC power supply 206, the value of VBUS begins to decrease. When VBUS falls below a threshold voltage value, the bidirectional DC-to-DC power converter 203 is switched into boost mode (e.g., by controller 106, by MRI system electronics 210, etc.), thereby causing the energy storage device 202 to provide current to the AC-to-DC power supply 206 and to regulate the value of VBUS. When the excess load on the AC-to-DC power supply 206 decreases and VBUS begins to rise (e.g., after a large gradient pulse is completed), the bidirectional DC-to-DC power converter 203 will be switched back to operating in buck mode once VBUS has remained above a threshold voltage level for a specified length of time (e.g., 50 μs). In some embodiments, boost mode may also be automatically terminated after a specified length of time (e.g., 200 ms) or if the voltage of the energy storage device 202 falls below a threshold value. In some embodiments, coupling the energy storage device 202 to the DC bus 204 through the bidirectional DC-to-DC power converter 203 may allow the energy storage device 202 to exchange energy with the DC bus 204 while allowing the energy storage device 202 and the DC bus 204 to maintain arbitrary and different DC voltages. For example, if the energy storage device 202 comprises a 1 F capacitor rated up to 60 V (i.e., it can store 1800 J of energy) and is directly connected to the DC bus 204, which is maintained at 48 V, the energy storage device 202 may store only up to 1152 J of energy. However, if the energy storage device 202 is coupled to the same DC bus 204 through the bidirectional DC-to-DC power converter 203, the energy storage device 202 may be charged up to 60 V (i.e., to its full 1800 J of energy) while still maintaining the DC bus 204 at a lower 48 V nominal level. The implementation of the bidirectional DC-to-DC power converter 203 may include any combination of component arrangements (e.g., a synchronous boost converter, a synchronous buck converter, and/or a four-switch buck-boost converter).
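The threshold-and-dwell behavior just described amounts to a small state machine. The sketch below captures it under stated assumptions: the VBUS threshold, the 50 μs dwell before returning to buck mode, the 200 ms boost timeout, and the storage-voltage floor mirror the examples in the text, while the class, its default values, and all names are hypothetical rather than the interface of any real controller. A production implementation would add hysteresis and rate limiting around the raw comparisons.

```python
class ConverterModeSupervisor:
    """State machine for the buck/boost switching described above."""

    def __init__(self, vbus_threshold_v=46.0, storage_floor_v=20.0,
                 dwell_s=50e-6, boost_timeout_s=0.200):
        self.vbus_threshold_v = vbus_threshold_v
        self.storage_floor_v = storage_floor_v
        self.dwell_s = dwell_s
        self.boost_timeout_s = boost_timeout_s
        self.mode = "buck"            # buck mode float-charges the storage
        self._boost_entered_s = None
        self._vbus_above_since_s = None

    def update(self, vbus_v: float, storage_v: float, now_s: float) -> str:
        if self.mode == "buck":
            if vbus_v < self.vbus_threshold_v:
                # Bus sags under gradient load: storage regulates VBUS.
                self.mode = "boost"
                self._boost_entered_s = now_s
                self._vbus_above_since_s = None
        else:
            if vbus_v >= self.vbus_threshold_v:
                if self._vbus_above_since_s is None:
                    self._vbus_above_since_s = now_s
            else:
                self._vbus_above_since_s = None
            recovered = (self._vbus_above_since_s is not None and
                         now_s - self._vbus_above_since_s >= self.dwell_s)
            timed_out = now_s - self._boost_entered_s >= self.boost_timeout_s
            depleted = storage_v < self.storage_floor_v
            if recovered or timed_out or depleted:
                self.mode = "buck"    # pulse over, timed out, or storage low
        return self.mode
```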
Some examples of DC-to-DC power converters that may be used in the power system 200b are shown in FIGS. 3A-3C and described below. FIG. 3A illustrates a block diagram of an exemplary DC-to-DC power converter 300a, in accordance with some embodiments. The energy storage device 202 may be configured to have an operating voltage that is lower than the nominal voltage of the DC bus 204. For example, the energy storage device 202 may be a 24 V rechargeable battery while the DC bus 204 may be maintained at 48 V. The DC-to-DC power converter 300a may be configured to function as a step-down converter when delivering power to the energy storage device 202 (e.g., to charge the energy storage device 202) and to function as a step-up converter when extracting power from the energy storage device 202 to deliver it to the DC bus 204. The DC-to-DC power converter 300a may include V/I monitors 304 to monitor the voltage and/or current flow from energy storage device 202 and the DC bus 204 and to determine the direction and magnitude of power flow at any given time, in some embodiments. A controller 305 may receive information indicative of a voltage and/or current flow into or out of the energy storage device 202 and/or the DC bus 204 from V/I monitors 304. The controller 305 may also receive information from another controller (e.g., controller 106 of FIG. 1) including instructions to change current direction and/or amplitude. In some embodiments, the controller may include, for example, a microcontroller. The DC-to-DC power converter 300a may include an inductor 306 coupled between the source and drain of transistor switches 308 and the energy storage device 202, in some embodiments. In some embodiments, the controller 305 may send instructions to drivers 310 to enable or disable transistor switches 308, allowing current to flow to or from the energy storage device 202 through inductor 306. FIG. 3B illustrates a block diagram of another exemplary DC-to-DC power converter 300b, in accordance with some embodiments. The energy storage device 202 may be configured to have an operating voltage that is higher than the nominal voltage of the DC bus 204. For example, the energy storage device 202 may be a capacitor (e.g., a 600 V film capacitor) while the DC bus 204 may be maintained at 48 V. The DC-to-DC power converter 300b may be configured to function as a step-down converter when extracting power from the energy storage device 202 to deliver it to the DC bus 204 and to function as a step-up converter when delivering power to the energy storage device 202 (e.g., to charge the energy storage device 202). The DC-to-DC power converter 300b may include the same or similar components as the DC-to-DC power converter 300a but may couple an output of the energy storage device 202 to a transistor switch 308 rather than the inductor 306, in some embodiments. The inductor 306 may be coupled between the source and drain of the transistor switches 308 and the DC bus 204. In some embodiments, the controller 305 may send instructions to drivers 310 to enable or disable transistor switches 308, allowing current to flow to or from the energy storage device 202 through inductor 306. FIG. 3C illustrates a block diagram of another exemplary DC-to-DC power converter 300c, in accordance with some embodiments. The voltage of the energy storage device 202 may be configured to vary above and below the nominal operating voltage of the DC bus 204. For example, the energy storage device 202 may comprise an array of supercapacitors (e.g., an array providing a total voltage of 18 V and a capacitance of 62 F). In such embodiments, DC-to-DC power converter 300c may function as either a step-up or step-down converter when transferring power in either direction between the energy storage device 202 and the DC bus 204. The DC-to-DC power converter 300c may include two pairs of transistor switches 308a and 308b and two drivers 310a and 310b, respectively, to control the states of the transistor switches 308a and 308b, in some embodiments. The controller 305 may send instructions to both drivers 310a and 310b in order to change the states of the transistor switches 308a and 308b and thereby, for example, change the direction of current flow between the energy storage device 202 and the DC bus 204. An inductor 306 may be coupled between the pairs of transistor switches 308a and 308b such that it is coupled between a source and a drain of both pairs of transistor switches 308a and 308b.
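For orientation, the ideal steady-state conversion ratios of these topologies, together with the capacitor-energy arithmetic behind the 1 F / 60 V example above, fit in a few lines. These are the textbook lossless-converter formulas, shown here as a sketch rather than a model of converters 300a-300c specifically.

```python
def buck_vout(v_in: float, duty: float) -> float:
    """Ideal step-down (buck) converter: Vout = D * Vin, 0 <= D <= 1."""
    return duty * v_in

def boost_vout(v_in: float, duty: float) -> float:
    """Ideal step-up (boost) converter: Vout = Vin / (1 - D), 0 <= D < 1."""
    return v_in / (1.0 - duty)

def capacitor_energy_j(c_farads: float, v_volts: float) -> float:
    """Stored energy E = (1/2) * C * V^2."""
    return 0.5 * c_farads * v_volts ** 2

print(buck_vout(48.0, 0.5))           # 24.0 V, e.g., charging a 24 V battery
print(boost_vout(24.0, 0.5))          # 48.0 V, e.g., returning power to the bus
print(capacitor_energy_j(1.0, 60.0))  # 1800.0 J at the capacitor's 60 V rating
print(capacitor_energy_j(1.0, 48.0))  # 1152.0 J if pinned to the 48 V bus
```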
FIG. 4 is an illustrative block diagram of a power system 400 for an MRI system including unidirectional DC-to-DC power converters 404 and 406, in accordance with some embodiments. The energy storage device 202 may be coupled to the DC bus 204 through transfer switches 403 and first DC-to-DC power converter 404 or second DC-to-DC power converter 406. For example, first DC-to-DC power converter 404 may be configured to transfer power from the energy storage device 202 to the DC bus 204 while second DC-to-DC power converter 406 may be configured to transfer power from the DC bus 204 to the energy storage device 202. In some embodiments, transfer switches 403 may be configured to couple the energy storage device 202 to the DC bus 204 either through the DC-to-DC power converter 404 or the DC-to-DC power converter 406 depending on the desired direction of power transfer between the energy storage device 202 and the DC bus 204. As shown in the example of FIG. 4, the MRI system electronics 210 may be communicatively coupled to the transfer switches 403 and may control and/or send information indicative of desired settings of the transfer switches 403 based on the desired direction of power transfer between the energy storage device 202 and the DC bus 204. For example, the transfer switches may be electronic relay switches that may be operated using an electrical signal. In some embodiments, the transfer switches may be manual switches that may be switched by, for example, a user of the MRI system. In some embodiments, the energy storage device 202 may additionally be configured to power a conveyance mechanism to enable portability of the MRI system. For example, the conveyance mechanism may comprise a motor coupled to one or more drive wheels to provide motorized assistance in transporting the MRI system between locations. Additional aspects of a portable MRI system are described in U.S. Pat. No. 10,222,434, titled “Portable Magnetic Resonance Imaging Methods and Apparatus” and filed on Jan. 24, 2018, which is hereby incorporated by reference in its entirety. FIG. 5 illustrates a block diagram of an exemplary power system 500 for a portable MRI system, in accordance with some embodiments. A transfer switch 512 may couple the energy storage device 202 either to the DC bus 204, to power the MRI system electronics 210, or to the mobile MRI drive system 514. The mobile MRI drive system 514 may include a motorized component configured to assist in moving the MRI system between locations, as described in connection with FIGS. 8A and 8B. FIG. 6 illustrates a block diagram of another exemplary power system 600 for a portable MRI system, in accordance with some embodiments. The AC-to-DC power supply 206 may power the energy storage device 202, and an output of the energy storage device 202 may be coupled to either the DC bus 204 or the mobile MRI drive system 514 by a transfer switch 512. In such embodiments, the energy storage device 202, when coupled to the DC bus 204, may be coupled to the DC bus 204 through a unidirectional DC-to-DC power converter 616. FIG. 7 is a flowchart of an illustrative process 700 for operating an MRI system, in accordance with some embodiments. Process 700 may be performed, at least in part, by any suitable computing device(s). For example, process 700 may be performed by one or more processors that are a part of the MRI system and/or by one or more processors external to the MRI system (e.g., computing devices in an adjoining room, computing devices elsewhere in a medical facility, and/or on the cloud). Process 700 begins at act 702, where a patient may be positioned in the MRI system, in some embodiments. The patient may be positioned so that the portion of the patient's anatomy that is to be imaged is placed within an imaging region of the MRI system.
For example, as shown in the example of FIG. 9, the patient's head may be positioned within the imaging region of the MRI system in order to obtain one or more images of the patient's brain. Next, process 700 proceeds to act 704, where a pulse sequence may be selected and accessed. The pulse sequence may be selected based on input from a user of the MRI system that is entered into a controller of the MRI system. For example, the user may input information about the patient (e.g., what portion of the patient's anatomy is positioned within the MRI system, what information the user would like to collect about the patient), and the controller may select an appropriate pulse sequence based on that input. Alternatively or additionally, the user may directly select a desired pulse sequence within a user interface of the controller. For example, the user may select a diffusion-weighted imaging (DWI) pulse sequence for imaging of the patient. Alternatively, the user may select, as non-limiting examples, a steady-state free precession (SSFP) pulse sequence, a balanced SSFP pulse sequence, a fluid-attenuated inversion recovery (FLAIR) pulse sequence, and/or a fast spin echo (FSE) pulse sequence. In some embodiments, the pulse sequence may be accessed by the controller in order to operate the MRI system in accordance with the pulse sequence. The pulse sequence may be stored electronically (e.g., in at least one computer-readable memory, for example, in a text file or in a database). In some embodiments, storing a pulse sequence may comprise storing one or more parameters defining the pulse sequence (e.g., timing sequences, gradient field strengths and directions, radio frequency pulse strengths and/or operating frequencies). It should be appreciated that a pulse sequence may be stored in any suitable way and in any suitable format, as aspects of the technology described herein are not limited in this respect. For example, the pulse sequence may be accessed from pulse sequences repository 108 by controller 106, as described in connection with FIG. 1 herein. Process 700 may then proceed to act 706, in which the MRI system may be operated in accordance with the selected pulse sequence, in some embodiments. Act 706 may include at least two sub-acts, 706A and 706B. In sub-act 706A, the MRI system may obtain power supplied by a power supply configured to receive mains electricity and supplemental power supplied by an energy storage device. For example, the MRI system may obtain power from AC-to-DC power supply 206 and supplemental power from energy storage device 202, as described in connection with the examples of FIGS. 2A-6. In sub-act 706B, the MRI system may generate, by using the obtained power and supplemental power, at least one gradient field using at least one gradient coil. For example, the MRI system may generate a diffusion gradient field during a period of a DWI pulse sequence using the power obtained from the power supply configured to receive mains electricity and the supplemental power obtained from an energy storage device. It may be appreciated that the MRI system may use the obtained power and supplemental power to generate any number and type of gradient fields based on the characteristics of the selected pulse sequence. FIGS. 8A and 8B illustrate views of a portable MRI system in which any of the power systems described in connection with FIGS. 2A, 2B, 3A, 3B, 3C, 4, 5, or 6 may be implemented, in accordance with some embodiments of the technology described herein.
Portable MRI system 800 comprises a B0 magnet 810 (e.g., B0 magnet 122 as described in connection with FIG. 1) formed in part by an upper magnet 810a and a lower magnet 810b having a ferromagnetic yoke 820 coupled thereto to increase the flux density within the imaging region. The B0 magnet 810 may be housed in magnet housing 812 along with gradient coils 815 (e.g., gradient coils 128 as described in connection with FIG. 1 herein or any of the gradient coils described in U.S. Pat. No. 9,817,093, titled “Low Field Magnetic Resonance Imaging Methods and Apparatus” and filed on Sep. 4, 2015, which is herein incorporated by reference in its entirety). According to some embodiments, B0 magnet 810 comprises an electromagnet. According to some embodiments, B0 magnet 810 comprises a permanent magnet. For example, in some embodiments, upper magnet 810a and lower magnet 810b may each include permanent magnet blocks (not shown). The permanent magnet blocks may be arranged in concentric rings about a common center. The upper magnet 810a and the lower magnet 810b may be arranged in a bi-planar configuration, as shown in the examples of FIGS. 8A and 8B, such that the imaging region is located between the upper magnet 810a and the lower magnet 810b. In some embodiments, the upper magnet 810a and the lower magnet 810b may each be coupled to and supported by a ferromagnetic yoke 820 configured to capture and direct magnetic flux from the upper magnet 810a and the lower magnet 810b. In some embodiments, B0 magnet 810 may be configured to generate a B0 magnetic field having a field strength that is less than or equal to 0.2 T, a field strength of less than or equal to 0.2 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 50 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 10 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 20 mT, a field strength of less than or equal to 0.1 T and greater than or equal to 0.05 mT, a field strength of less than or equal to 0.2 T or greater than or equal to 20 mT, or a field strength within any suitable range within these ranges. Portable MRI system 800 further comprises a base 850 housing the electronics needed to operate the MRI system. For example, base 850 may house power supply system 112 (including power supply 112a, energy storage device 112b, and power electronics 112c), amplifiers 114, and/or transmit and receive circuitry 116 as described in connection with FIG. 1. Such power components may be configured to operate the MRI system (e.g., to operate the gradient coils 815 in accordance with a pulse sequence) using mains electricity provided to the power supply 112a (e.g., via a connection to a standard wall outlet and/or a large appliance outlet) and supplemental power supplied by the energy storage device 112b. For example, the power supply system 112 may include any of the power systems 200a, 200b, 300a, 300b, 300c, 400, 500, or 600 as described herein. To facilitate transportation, a motorized component 880 is provided to allow portable MRI system 800 to be driven from location to location, for example, using a control such as a joystick or other control mechanism provided on or remote from the MRI system. The motorized component 880 may be powered, in part or in whole, by an energy storage device of the MRI system (e.g., energy storage device 202 as described in connection with FIGS. 5 and 6).
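The role of the transfer switch in these portable configurations reduces to a simple routing decision. The sketch below is a toy stand-in for transfer switch 512: the enum, function, and criterion (whether the system is in transit) are hypothetical, intended only to show the either/or coupling described above.

```python
from enum import Enum

class StorageDestination(Enum):
    DC_BUS = "DC bus 204 (MRI system electronics)"
    DRIVE = "mobile MRI drive system 514"

def route_storage_power(in_transit: bool) -> StorageDestination:
    """Route stored energy to the drive while moving, else to imaging."""
    return StorageDestination.DRIVE if in_transit else StorageDestination.DC_BUS

print(route_storage_power(in_transit=True).value)   # powering the drive
print(route_storage_power(in_transit=False).value)  # powering the electronics
```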
In this manner, portable MRI system 800 can be transported to the patient and maneuvered to the bedside to perform imaging, as illustrated in FIG. 9. For example, FIG. 9 illustrates a portable MRI system 900 that has been transported to a patient's bedside to perform a brain scan. In some embodiments, portable MRI system 900 may be operated to perform a brain scan using power supplied by a power supply connected to mains electricity and supplemental power supplied by an energy storage device as described in connection with FIGS. 2A, 2B, 3A, 3B, 3C, 4, 5, 6, and 7 herein. For example, if a DWI pulse sequence is being used to perform the brain scan, supplemental power supplied by the energy storage device may be provided in addition to the power supplied by the power supply during periods of time corresponding to the generation of diffusion gradient pulses of the DWI pulse sequence. Having thus described several aspects of at least one embodiment of this technology, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Various aspects of the technology described herein may be used alone, in combination, or in a variety of arrangements not specifically described in the embodiments described in the foregoing, and the technology is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. Also, the technology described herein may be embodied as a method, examples of which are provided herein including with reference to FIG. 7. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified.
Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms “approximately” and “about” may be used to mean within ±20% of a target value in some embodiments, within ±10% of a target value in some embodiments, within ±5% of a target value in some embodiments, within ±2% of a target value in some embodiments. The terms “approximately” and “about” may include the target value.
57,316
11860256
DETAILED DESCRIPTION OF THE EMBODIMENTS Nuclear magnetic resonance occurs when nuclei having an odd number of nucleons, such as hydrogen (1H), are exposed simultaneously to a magnetic field sufficiently strong to align the magnetic moment of these nuclei due to spins of the nucleons, and to electromagnetic radiation at a specific magnetic-field-dependent frequency. At this specific frequency, energy may be absorbed from the electromagnetic field or energy may be re-radiated after stimulus with the electromagnetic field. The electromagnetic field is typically in the VHF to UHF radio frequency bands. FIG. 2 is a functional block diagram of one example magnetic resonance imaging (MRI) system 200 for forming an MR image 259 of a subject 201. MRI system 200 includes an MRI scanner 290 and utilizes a pulse sequence configurable to minimize or reduce phase error in MR image 259. MRI scanner 290 includes a magnet architecture 240 and a plurality of RF coils 248(1, …, M). MRI system 200 may include at least one phased array coil, each including a plurality of coils 248. In embodiments, the phased array coil includes between twenty and forty coils, each functioning as a respective channel. Subject 201 lies on a patient table 242 such that at least a part of subject 201 is within an imaging volume 241 that is subdivided into a plurality of voxels 243. For clarity of illustration, not all voxels 243 of imaging volume 241 are shown in FIG. 2. RF coils 248 function as receiver channels of MR signals generated within the portion of subject 201 located within imaging volume 241. Transverse and sagittal planes intersecting subject 201 are parallel to the x-y plane and the x-z plane, respectively. Imaging volume 241 includes a transverse plane 244, which is a representative plane corresponding to the plane of MR image 160, FIG. 1. Transverse plane 244 intersects a plurality of voxels 243 that include sources of streak artifacts 162. The total volume of voxels 243 equals that of imaging volume 241. RF coils 248 may be transceivers that function as antennae capable of both (a) transmitting an RF signal to excite protons in subject 201 and (b) receiving MR signals from excited protons. In embodiments, MRI scanner 290 includes dedicated transmitter coils for transmitting RF signals, such that RF coils 248 operate as receiver coils only. In embodiments, each RF coil 248 functions as a separate receiver channel of MRI system 200, and is accordingly and interchangeably referred to as receiver coil 248, receiver channel 248, or channel 248. Each RF coil 248 may be at least one of a surface coil, a paired saddle coil, a birdcage coil, a Helmholtz coil pair, and other coil types known in the art. MRI system 200 also includes a data processor 205 and a pulse programmer 224. Pulse programmer 224 applies a pulse sequence 227 to RF coils 248 and magnet architecture 240. Pulse programmer 224 includes pulse sequence parameters 225 that, at least in part, define pulse sequence 227. Pulse sequence parameters 225 may be stored in memory within pulse programmer 224. Pulse programmer 224 determines an RF signal to be transmitted by RF coils 248 (or, alternatively, dedicated transmitter coils) according to pulse sequence parameters 225. RF coils 248 transmit this RF signal to imaging volume 241 so as to excite nuclear magnetic resonances of protons in subject 201. The excited protons emit MR signals that are detected by RF coils 248. RF coils 248 may include coils in orthogonal planes such that they make in-phase (real) and quadrature (imaginary) measurements of MR signals.
Pulse sequence parameters 225 relevant to controlling RF coils 248 may include repetition time between RF pulses emitted by RF coils 248. Magnet architecture 240 produces a magnetic field within volume 241. Pulse sequence parameters 225 include parameters that define gradients of this magnetic field, such as echo times. Data processor 205 receives MR signals detected by RF coils 248 as MR signals 202. Data processor 205 includes software that reconstructs a plurality of coil images 260 from MR signals 202. From coil images 260, data processor 205 generates a plurality of streak-suppressed multi-coil images 270. Data processor 205 may combine the streak-suppressed multi-coil images to yield a streak-suppressed MR image 259. When coil images 260 are k-space images, e.g., in the spatial-frequency domain, data processor 205 inverts coil images 260, via a Fourier transform or its inverse, for example, to recover magnitude and/or phase parameters for each voxel 243. The recovered magnitude and phase voxels for each channel are then processed to yield image data 259. FIG. 3 is a schematic plan view of subject 301 with a plurality of coil positions 348 superimposed thereon. In embodiments, each coil position 348 corresponds to a location of a respective RF coil 248. For clarity of illustration, not all coil positions 348 are enumerated in FIG. 3. FIG. 4 illustrates coil images 460(1), 460(2), and 460(M), each of which is an example of a respective coil image 260. Coil images 460(1) and 460(M) contain strong radial streaks emanating from a region 464, which includes a cross-sectional MR image of the right arm of subject 201. FIG. 4 also illustrates streak-suppressed multi-coil images 470(1), 470(2), and 470(M), each of which is an example of a respective streak-suppressed multi-coil image 270. In embodiments, data processor 205 generates streak-suppressed multi-coil images 470 from coil images 460 by processing the coil images using equations (1)-(5) described below. Each image 470(k) may be represented by a two-dimensional single-coil matrix X_k, which is a p_x × p_y array. Each element of single-coil matrix X_k represents image data at a respective coordinate (x, y) within a cross-sectional plane of subject 201. Index k is a positive integer less than or equal to M. Matrices X_{k=1,2,…,M} may be combined to form an image stack X̄ = [X_1, X_2, …, X_M], which is a three-dimensional array with dimensions p_x × p_y × M. Given at least one region of interest (denoted as Ω) corresponding to the source(s) of the streaks, an interference correlation matrix is estimated using eq. (1). In eq. (1), |⋅| denotes the cardinality of a set, H denotes the conjugate transpose, and s indexes coordinates (x, y) in the region of interest. Coordinates (x, y) correspond to a location within a cross-sectional plane of subject 201. In equations (1)-(4), uppercase bold letters denote matrices, and lowercase bold letters denote vectors.

C_i = \frac{1}{\lvert \Omega \rvert} \sum_{s \in \Omega} \bar{X}(s)\, \bar{X}(s)^{H}    (1)

C_i = Q D Q^{H}    (2)

Q = [e_1, e_2, \ldots, e_r \mid e_{r+1}, \ldots, e_N] = [Q_r \mid Q_{N-r}]    (3)

P = I - Q_r Q_r^{H}    (4)

d_{\perp} = P d    (5)

An eigenanalysis of matrix C_i via eq. (2) yields eigenvectors e that span the space comprising two orthogonal subspaces: an interference subspace Q_r and an interference null space Q_{N-r} shown in eq. (3). Matrix Q is a modal matrix of subspace-eigenvectors e. The first r eigenvectors span the interference subspace and the remaining (N−r) eigenvectors span the interference null space. Q_r^H is the Hermitian transpose of Q_r. D is a diagonal matrix.
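Equations (1)-(4) translate almost line for line into NumPy. The sketch below is a minimal illustration, assuming complex coil images of shape (px, py, M), a set of (x, y) coordinates for the region Ω, and an interference rank r chosen in advance (in practice r would be picked from the eigenvalue spectrum); the function name and the toy data are hypothetical.

```python
import numpy as np

def interference_null_projector(images: np.ndarray, omega: np.ndarray,
                                r: int) -> np.ndarray:
    """Build the projector of eq. (4) from coil images and a region omega.

    images: (px, py, M) complex coil images; omega: (K, 2) integer (x, y)
    coordinates in the streak-source region; r: interference subspace rank.
    """
    samples = images[omega[:, 0], omega[:, 1], :]       # rows are X̄(s)^T
    # Eq. (1): Ci = (1/|omega|) * sum_s X̄(s) X̄(s)^H
    ci = samples.T @ samples.conj() / len(omega)        # (M, M) Hermitian
    # Eq. (2): Ci = Q D Q^H; eigh returns eigenvalues in ascending order.
    _, q = np.linalg.eigh(ci)
    # Eq. (3): the r dominant eigenvectors span the interference subspace.
    q_r = q[:, ::-1][:, :r]
    # Eq. (4): P = I - Qr Qr^H projects onto the interference null space.
    return np.eye(images.shape[-1]) - q_r @ q_r.conj().T

# Toy usage with random data, an 8x8 region of interest, and rank 2;
# eq. (5) is then simply d_perp = P @ d for a coil vector d.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((64, 64, 8)) + 1j * rng.standard_normal((64, 64, 8))
omega = np.argwhere(np.ones((8, 8), dtype=bool)) + 20
P = interference_null_projector(imgs, omega, r=2)
```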
Equation (4) is an expression for a projection matrix P onto the interference null space, which may be applied either in the spatial domain or in the spatial-frequency domain (k-space). Eq. (5) shows projection matrix P operating on a vector or matrix d. In a first example, d represents a single-coil image, such as coil image 260, in either the spatial domain or the spatial-frequency domain. Preprocessed coil image d⊥ is the projection of the coil data into the interference null space. In a second example, d includes k-space representations of each of single-coil matrices X_{k=1,2,…,M}. When the k-space representation of each single-coil matrix X_k is a two-dimensional array with dimensions p_{vx} × p_{vy}, then d or its transpose is a two-dimensional array with (p_{vx}·p_{vy}) rows and M columns. In a third example, d includes spatial-domain representations of each of single-coil matrices X_{k=1,2,…,M}, and d or its transpose is a two-dimensional array with (p_x·p_y) rows and M columns. In the aforementioned second and third examples, d may be a Casorati matrix of single-coil matrices [X_1, X_2, …, X_M]. Applying the projection in k-space eliminates streaks due to gradient nonlinearities from the k-space data, which may subsequently be used to generate streak-suppressed multi-coil image 270. In embodiments, streak-suppressed multi-coil image 270 is an image that minimizes, to a minimization tolerance, a difference between (i) a product of a modal matrix of subspace-eigenvectors and the streak-suppressed multi-coil image and (ii) the preprocessed coil image. For example, streak-suppressed multi-coil image 270 may be determined using the iterative reconstruction expressed in eq. (6).

\hat{m} = \arg\min_{m} f(E, m, P, d) + \lambda R(m)    (6)

Image m̂ is an example of a streak-suppressed multi-coil image 270, and is the value of image-data array m that minimizes objective function f(⋅). In eq. (6), E denotes an encoding matrix, P is the projection matrix of eq. (4), λ is a weighting parameter, and R is a regularization operator. When data processor 205 executes eq. (6), the minimization may be subject to a predetermined tolerance, e.g., one or more stopping criteria. Objective function f(⋅) may be a data consistency function, i.e., one that returns a value that indicates consistency of its solution with measured data. In embodiments, the argument to objective function f(⋅) is |Em − Pd|, such that image m̂ is the value of image-data array m that minimizes a difference between Em and Pd. For example, f(E, m, P, d) may be an entry-wise matrix norm. In a first example, the matrix norm may be expressed as

\lVert A \rVert_{p} = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} \lvert a_{i,j} \rvert^{p} \right)^{1/p},

where p ≥ 1, A = Em − Pd, and a_{i,j} is the entry of matrix A in row i and column j. When p = 2, the norm is the Frobenius norm. In a second example, f(E, m, P, d) may be an L_{p,q} norm:

\lVert A \rVert_{p,q} = \left( \sum_{j=1}^{n} \left( \sum_{i=1}^{m} \lvert a_{i,j} \rvert^{p} \right)^{q/p} \right)^{1/q},

where p ≥ 1 and q ≥ 1. In a third example, f(E, m, P, d) may equal

\max_{1 \le j \le n} \sum_{i=1}^{m} \lvert a_{i,j} \rvert \quad \text{or} \quad \max_{1 \le i \le m} \sum_{j=1}^{n} \lvert a_{i,j} \rvert.

Encoding matrix E includes a combination of a Fourier transform and coil sensitivity encoding. When MRI system 200 operates to generate multi-contrast images, encoding matrix E may also include a temporal projection. In embodiments, processor 205 generates streak-suppressed multi-coil image 270 by performing an iterative reconstruction on k-space images d, rather than on preprocessed coil images d⊥.
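A hedged sketch of one way to carry out the minimization in eq. (6): f is taken as the squared 2-norm of Em − Pd, R(m) as a Tikhonov penalty ||m||², and the solution is found by plain gradient descent, with E modeled as coil-sensitivity weighting followed by an orthonormal 2-D FFT. The inputs (sensitivity maps, measured k-space d, and the projector P from the previous sketch) and all names are assumptions; a practical solver would use conjugate gradients and a problem-specific regularizer. The alternative of eqs. (7) and (8), shown next, instead reconstructs from the unprojected data d and applies P to the result afterward.

```python
import numpy as np

def reconstruct_eq6(kspace: np.ndarray, sens: np.ndarray, P: np.ndarray,
                    lam: float = 1e-3, step: float = 0.5,
                    iters: int = 50) -> np.ndarray:
    """Gradient descent on ||E m - P d||^2 + lam * ||m||^2.

    kspace: (px, py, M) measured multi-coil k-space data d;
    sens: (px, py, M) coil sensitivity maps; P: (M, M) projector of eq. (4).
    """
    pd = kspace @ P.T                       # apply P along the coil axis
    m = np.zeros(kspace.shape[:2], dtype=complex)
    for _ in range(iters):
        em = np.fft.fft2(sens * m[..., None], axes=(0, 1), norm="ortho")
        resid = em - pd
        # Gradient of the objective: E^H (E m - P d) + lam * m.
        grad = (sens.conj() *
                np.fft.ifft2(resid, axes=(0, 1), norm="ortho")).sum(axis=-1)
        m = m - step * (grad + lam * m)
    return m
```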
\hat{\mu} = \arg\min_{m} f(E, m, d) + \lambda R(m)    (7)

\hat{\mu}_{\perp} = P \hat{\mu}    (8)

FIG. 5 is a functional block diagram of an MRI system 500 that forms MR image 259 of subject 201. MRI system 500 is an example of MRI system 200 and includes data processor 205 and an MRI scanner 590, which is an example of MRI scanner 290, FIG. 2. MRI system 500 implements magnet architecture 240 as a magnet 544 and gradient coils 546. In embodiments, MRI system 500 includes at least one of a data acquisition system 522, an RF system 524, and an RF detector 528. MRI system 500 may further include a gradient amplifier 526. In embodiments, pulse programmer 224 includes a pulse sequence optimizer 523 for optimizing pulse sequence parameters 225 for a given type of MR measurement. RF system 524 generates RF signals for RF coils 248 according to pulse sequence parameters 225. RF system 524 may include an RF source and an RF amplifier. In MRI scanner 590, magnet 544 produces a primary (or main) magnetic field parallel to the z-axis. Gradient coils 546 are capable of producing three orthogonal gradient fields capable of distorting the primary magnetic field in one or more directions spanned by axes x, y, and z. The gradient field is determined by pulse programmer 224, which is electrically coupled to gradient coils 546, optionally via gradient amplifier 526. Pulse programmer 224, and pulse sequence parameters 225 therein, determine the gradient fields' spatial distribution and amplitude. Pulse sequence parameters 225 relevant to controlling gradient coils 546 may include velocity-encoding gradient parameters and motion-encoding gradient parameters. Gradient amplifier 526 enables gradient coils 546 to produce sufficiently strong gradient fields to enable capture of MR images. RF detector 528 detects MR signals received by RF coils 248 and transmits them as MR signals 202 to data processor 205 via data acquisition system 522. In some modes of operation, data acquisition system 522 may feed back at least a portion of MR signals 202 to pulse programmer 224 such that pulse programmer 224 may adjust gradient fields and transmitted RF signals in response to previous MR measurements. FIG. 6 is a schematic of a data processor 605, which is an example of data processor 205 of MRI system 200, FIG. 2. Data processor 605 includes circuitry 606 that implements functionality of data processor 605. In embodiments, circuitry 606 is, or includes, an integrated circuit, such as an application-specific integrated circuit or a field-programmable gate array. Circuitry 606 executes several functions of data processor 605 described herein, which are represented by operators 620. Each of operators 620 may be executed by one or more circuits of circuitry 606. Operators 620 include a correlator 622, an eigensolver 624, a mapper 626, a preprocessor 628, and an image reconstructor 630. Operators 620 may also include at least one of an image generator 621 and an image combiner 639. Image generator 621 generates coil images 660 from MR signals 202. Each coil image 660 is an example of a coil image 260. Coil images 660 include M coil images 661, and may also include N coil images 662, where N ≤ M. Coil images 661 may include at least one of coil images 662. In embodiments, circuitry 606 includes at least one of a processor 686 and a memory 608, which stores software 609. Software 609 may include operators 620, in which case each operator 620 includes machine-readable instructions that are executed by processor 686 to implement functionality of data processor 605. Software 609 may be firmware of circuitry 606.
Memory 608 may be transitory and/or non-transitory and may include one or both of volatile memory (e.g., SRAM, DRAM, computational RAM, other volatile memory, or any combination thereof) and non-volatile memory (e.g., FLASH, ROM, magnetic media, optical media, other non-volatile memory, or any combination thereof). Part or all of memory 608 may be integrated into processor 686. Data processor 605 generates streak-suppressed multi-coil images 670 from MR signals 202. Each streak-suppressed multi-coil image 670 is an example of a respective streak-suppressed multi-coil image 270. To generate images 670 from signals 202, at least one operator 620 generates a respective intermediate output 640 that is used by a subsequent operator 620. Intermediate outputs 640 include correlation matrix 642, subspace eigenvectors 644, projection matrix 646, and preprocessed coil images 648, which are generated by correlator 622, eigensolver 624, mapper 626, and preprocessor 628, respectively. When operating, circuitry 606 stores at least one of intermediate outputs 640. In embodiments, at least one of:
(a) image generator 621 generates a respective coil image 660 from each MR signal 202;
(b) correlator 622 executes eq. (1) to generate correlation matrix C_i, which is an example of correlation matrix 642;
(c) eigensolver 624 executes eq. (2) to yield eigenvectors e of eq. (3), which are examples of subspace eigenvectors 644;
(d) mapper 626 executes eq. (4) to generate projection matrix P, which is an example of projection matrix 646;
(e) preprocessor 628 multiplies coil image d by projection matrix P per eq. (5) to yield preprocessed coil image d⊥, which is an example of a preprocessed coil image 648; and
(f) image reconstructor 630 executes eq. (6) for each coil image 661 or 662 to generate a respective image m̂, which is an example of streak-suppressed multi-coil image 670.
FIG. 7 is a flowchart illustrating a method 700 for producing a streak-suppressed magnetic resonance image of a subject. Method 700 may be implemented within one or more aspects of data processor 605, FIG. 6. In embodiments, method 700 is implemented by processor 686 executing computer-readable instructions of software 609. At one or more instances during the execution of method 700, at least one of intermediate outputs 640 is stored in a memory of circuitry 606, such as memory 608. Method 700 includes steps 710, 720, 730, 740, and 750. Step 710 includes generating an interference correlation matrix from M coil images, each of the M coil images having been derived, e.g., reconstructed, from a respective one of M MR signals each detected by a respective one of a phased array of M coils of an MRI scanner. In embodiments, the MR signals originate in a first plurality of voxels of the subject corresponding to an artifact-region of a coil image corrupted by an artifact. In an example of step 710, correlator 622 generates correlation matrix 642 from coil images 661. Step 720 includes producing eigenvectors of the interference correlation matrix. The eigenvectors include a plurality of subspace-eigenvectors that span an interference subspace and a plurality of null-space-eigenvectors that span an interference null space. In an example of step 720, eigensolver 624 generates subspace eigenvectors 644 from correlation matrix 642. Step 730 includes determining, from the plurality of subspace-eigenvectors, a projection matrix of the interference null space. In an example of step 730, mapper 626 determines projection matrix 646 from subspace-eigenvectors 644.
Step 740 includes preprocessing N coil images with the projection matrix to yield N preprocessed coil images. The quantity N is less than or equal to M of step 710. Each of the N coil images is derived from a respective one of N MR signals each detected by a respective one of the phased array of M coils. In a first example of step 740, preprocessor 628 pre-processes coil images 661 with projection matrix 646 to yield preprocessed coil images 648. In this example, coil images 660 need not include coil images 662, and the M MR signals and the M coil images of step 710 include the N MR signals and N coil images, respectively. In a second example of step 740, coil images 660 include coil images 662, and preprocessor 628 pre-processes coil images 662 with projection matrix 646 to yield preprocessed coil images 648. In this example, the M MR signals and the M coil images of step 710 do not include the N MR signals and N coil images, respectively. For example, the M coil images of step 710 used to compute the interference correlation matrix may be from a previously acquired data set, and hence are not derived from the M MR signals referred to in step 740. Several MRI data sets of the same subject may be acquired in sequence, for example. In embodiments, the M coil images may be the result of a separate session, a "calibration scan," which is performed to estimate coil sensitivities. The M coil images, and equivalently i(s) of eq. (1), may be derived from such previous scans.

Step 750 includes applying an image-reconstruction technique to the preprocessed coil images to obtain N reconstructed coil images. In an example of step 750, image reconstructor 630 applies an image-reconstruction technique to preprocessed coil images 648 to yield streak-suppressed multi-coil images 670. In embodiments, step 750 includes step 752. Step 752 includes, for each preprocessed coil image, determining a streak-suppressed multi-coil image that minimizes, to a minimization tolerance, a difference between (i) a product of a modal matrix of subspace-eigenvectors and the streak-suppressed multi-coil image and (ii) the preprocessed coil image. In an example of step 752, image reconstructor 630 determines, from each preprocessed coil image 648, a respective streak-suppressed multi-coil image 670 by implementing eq. (6).

Method 700 may also include a step 760, which includes combining the N streak-suppressed multi-coil images to yield the streak-suppressed MR image. In an example of step 760, image combiner 639 combines streak-suppressed multi-coil images 670 to yield a streak-suppressed MR image 659.
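Step 752 above is, in essence, a linear least-squares problem: find the image vector whose product with the modal matrix of subspace-eigenvectors best matches the preprocessed coil image. A minimal numpy sketch under that reading follows, treating each pixel's coil vector independently; this pixelwise formulation is an assumption made for illustration, since eq. (6) itself is defined earlier in the description.

    import numpy as np

    def streak_suppressed_image(E, x_perp):
        """Solve step 752's least-squares problem: minimize ||E @ m - x_perp||.

        E:      complex array, shape (M, M) -- modal matrix whose columns
                are the eigenvectors of the interference correlation matrix.
        x_perp: complex array, shape (M, n_pixels) -- preprocessed coil
                vectors, one column per pixel.
        Returns m_hat with the same shape as x_perp.
        """
        # lstsq minimizes the residual to (numerical) tolerance, matching
        # the "minimization tolerance" language of step 752.
        m_hat, *_ = np.linalg.lstsq(E, x_perp, rcond=None)
        return m_hat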
FIG. 8 is a schematic of a data processor 805, which is an example of data processor 205 of MRI system 200, FIG. 2. Data processor 805 includes circuitry 806 that implements the functionality of data processor 805. In embodiments, circuitry 806 is, or includes, an integrated circuit, such as an application-specific integrated circuit or a field-programmable gate array. Circuitry 806 executes several functions of data processor 805 described herein, which are represented by operators 820. Each of operators 820 may be executed by one or more circuits of circuitry 806. Operators 820 include correlator 622, eigensolver 624, and mapper 626, introduced above in the description of data processor 605. Operators 820 also include an image reconstructor 828 and a post-processor 830. Operators 820 may also include at least one of image generator 621 and image combiner 639. In embodiments, circuitry 806 includes at least one of processor 686 and memory 608, which stores software 809. Software 809 may include operators 820, in which case each operator 820 includes machine-readable instructions that are executed by processor 686 to implement the functionality of data processor 805. Software 809 may be firmware of circuitry 806.

Data processor 805 generates streak-suppressed multi-coil images 870 from MR signals 202. Each streak-suppressed multi-coil image 870 is an example of a respective streak-suppressed multi-coil image 270. To generate images 870 from signals 202, at least one operator 820 generates a respective intermediate output 840 that is used by a subsequent operator 820. When operating, circuitry 806 stores at least one of intermediate outputs 840. Intermediate outputs 840 include correlation matrix 642, subspace eigenvectors 644, projection matrix 646, and reconstructed coil images 848. Image reconstructor 828 generates reconstructed coil images 848. In embodiments, and for each coil image 661 or 662, image reconstructor 828 executes eq. (7) to generate a respective image μ̂, which is an example of reconstructed coil image 848. From each reconstructed coil image 848, post-processor 830 generates a respective streak-suppressed reconstructed multi-coil image 870. In embodiments, and for each reconstructed coil image 848, post-processor 830 executes eq. (8) to generate a respective streak-suppressed reconstructed multi-coil image μ̂⊥, which is an example of streak-suppressed reconstructed multi-coil image 870.

FIG. 9 is a flowchart illustrating a method 900 for producing a streak-suppressed magnetic resonance image of a subject. Method 900 may be implemented within one or more aspects of data processor 805, FIG. 8. In embodiments, method 900 is implemented by processor 686 executing computer-readable instructions of software 809. At one or more instances during the execution of method 900, at least one of intermediate outputs 840 is stored in a memory of circuitry 806, such as memory 608. Method 900 includes steps 710, 720, and 730 of method 700 described above. Method 900 also includes steps 940 and 950.

Step 940 includes applying an image-reconstruction technique to the N coil images to yield N multi-coil complex images. In an example of step 940, image reconstructor 828 applies an image-reconstruction technique to coil images 661 or 662 to yield reconstructed coil images 848. In this example, image reconstructor 828 may employ eq. (7) as described above.

Step 950 includes post-processing the N multi-coil complex images with the projection matrix to yield the N streak-suppressed multi-coil images. In an example of step 950, post-processor 830 processes reconstructed coil images 848 to yield streak-suppressed multi-coil images 870. In this example, post-processor 830 may employ eq. (8) as described above.

Method 900 may also include step 760, introduced in the description of method 700. In an example of step 760, image combiner 639 combines streak-suppressed multi-coil images 870 to yield a streak-suppressed MR image 859.
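Method 900 defers the projection until after reconstruction: eq. (7) is a generic regularized reconstruction, and eq. (8) simply applies the null-space projector P to each reconstructed multi-coil vector. The sketch below shows the post-processing of step 950 in numpy; the reconstruction of step 940 is represented by a caller-supplied result, since eq. (7) leaves the data-consistency term f and the regularizer R open, and the coil dimension of mu_hat is assumed to match that of P.

    import numpy as np

    def post_process(mu_hat, P):
        """Step 950 / eq. (8): project reconstructed multi-coil images
        onto the interference null space, mu_perp = P @ mu_hat.

        mu_hat: complex array, shape (M, H, W) -- reconstructed coil images,
                e.g., the output of any solver implementing eq. (7).
        P:      complex array, shape (M, M)    -- null-space projector.
        """
        M, H, W = mu_hat.shape
        flat = mu_hat.reshape(M, -1)          # one coil vector per pixel
        return (P @ flat).reshape(M, H, W)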
EXAMPLES

Embodiments disclosed herein were evaluated using abdomen data acquired at 1.5 T (Siemens, Aera) with a radial turbo spin-echo pulse sequence (RADTSE, as in ref. [3]) with TR = 2500 ms, FA = 150 deg, 192 views with 256 readout points/view, ETL = 32 (to yield six views/TE), echo spacing = 7.3 ms, slice thickness = 8 mm, and FOV = 40-46 cm. Streak removal was evaluated on (i) the composite images (where all radial views are used to reconstruct an image with an average TE contrast) using adaptive coil combine (ACC, ref. [4]) and on (ii) the T2 maps reconstructed from TE data sets using a model-based CS approach (LLR, ref. [5]). For comparison, streak removal was also evaluated using the auto coil selection (ACS) and B-STAR algorithms, refs. [1] and [2]. Quantitative metrics based on cancellation ratio (as in ref. [6]), as defined in eqs. (9)-(11), were used to compute the signal and streak cancellations. Eqs. (9)-(11) define a signal cancellation ratio (SCR), an interference cancellation ratio (ICR), and a signal-to-interference ratio gain (SIRG), respectively.

\mathrm{SCR} = 10 \log_{10}\left( \frac{w^H R_s w}{w^H R_s' w} \right)   (9)

\mathrm{ICR} = 10 \log_{10}\left( \frac{w^H R_i w}{w^H R_i' w} \right)   (10)

\mathrm{SIRG} = \mathrm{SCR} - \mathrm{ICR}   (11)

In the above equations, w = (1/n)[1, 1, …, 1]^T is the quiescent weight vector, R_s and R_s' are the signal correlation matrices before and after destreaking, and R_i and R_i' are the interference correlation matrices before and after destreaking. The signal correlation matrix was estimated using a central rectangular region of interest that excludes all interference sources. The interference correlation matrix was estimated using all the interference-source regions of interest.

FIG. 10 shows an example of a composite image 1060 with strong streaks emanating from a region 1064, which includes an image of the subject's left arm. FIG. 11 shows image 1060 after application of streak removal using known methods (ACS, B-STAR) and method 700 at different levels of destreaking. The strength of destreaking was tuned via different numbers of pruning rounds (ACS), diagonal loading weights λ (B-STAR), and the rank of the interference subspace (CACTUS). The ACS and B-STAR images include residual streaking at region 1110. The ACS images include signal loss at region 1120, and the strongly destreaked image produced by method 700 most successfully cancels the streaks in region 1064, as denoted by region 1164.

FIG. 12 shows two cases with mild (subject 1) and strong (subject 2) streaks, which originate from multiple sources. In subject 1, ACS results in signal drop (region 1210), whereas method 700 shows excellent streak suppression at the subject's arms (regions 1230) without noticeable signal losses in the anatomy adjacent to the arms. In subject 2, the ACS and B-STAR approaches could not fully remove the streaks from the left arm (regions 1210). By contrast, method 700 reduced the streaks from both arms without significant signal loss.

FIG. 13 is tabulated data showing the quantitative performance of these methods for the three cases shown in FIGS. 11 and 12. The data in FIG. 13 include the signal cancellation ratio (SCR), interference cancellation ratio (ICR), and signal-to-interference ratio gain (SIRG) expressed by eqs. (9), (10), and (11), respectively. ACS has the worst SCR, suggesting an SNR decrease after coil removal. B-STAR preserves the signal better than ACS, but its ICR is several decibels worse than that of method 700. Method 700 also achieves the best SIRG among the three methods.
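The cancellation-ratio metrics of eqs. (9)-(11) are straightforward quadratic forms. A minimal numpy sketch, assuming the four correlation matrices have already been estimated from the respective regions of interest as described above:

    import numpy as np

    def quadratic_form(w, R):
        """w^H R w as a real scalar (R is Hermitian)."""
        return float(np.real(w.conj() @ R @ w))

    def cancellation_metrics(Rs, Rs_post, Ri, Ri_post):
        """Eqs. (9)-(11): SCR, ICR, and SIRG in dB. Rs/Ri are the signal
        and interference correlation matrices before destreaking,
        Rs_post/Ri_post the corresponding matrices after destreaking."""
        n = Rs.shape[0]
        w = np.ones(n) / n                      # quiescent weight vector
        scr = 10 * np.log10(quadratic_form(w, Rs) / quadratic_form(w, Rs_post))
        icr = 10 * np.log10(quadratic_form(w, Ri) / quadratic_form(w, Ri_post))
        return scr, icr, scr - icr              # SIRG = SCR - ICR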
Combination of Features

Features described above as well as those claimed below may be combined in various ways without departing from the scope hereof. The following enumerated examples illustrate some possible, non-limiting combinations:

(A1) A method for producing a streak-suppressed MR image of a subject includes (i) generating an interference correlation matrix from M coil images, (ii) producing eigenvectors of the interference correlation matrix, and (iii) determining, from the plurality of subspace-eigenvectors, a projection matrix of the interference null space. Each of the M coil images is derived from a respective one of M MR signals each detected by a respective one of a phased array of M coils of an MRI scanner. The eigenvectors include a plurality of subspace-eigenvectors that span an interference subspace and a plurality of null-space-eigenvectors that span an interference null space. The method also includes generating, from N coil images each derived from a respective one of N MR signals each detected by a respective one of the phased array of M coils, N streak-suppressed multi-coil images by either (i) preprocessing the N coil images with the projection matrix and applying an image-reconstruction technique to each of the resultant N preprocessed coil images, or (ii) applying an image-reconstruction technique to each of the N coil images to obtain N reconstructed coil images and post-processing the resultant N reconstructed coil images with the projection matrix.

(A2) In embodiments of method (A1), the M coil images include the N coil images.

(A3) In embodiments of method (A1), the M coil images do not include the N coil images.

(A4) In embodiments of any one of methods (A1)-(A3), generating the N streak-suppressed multi-coil images from the N coil images includes: preprocessing each of the N coil images with the projection matrix to yield a respective one of N preprocessed coil images; and applying an image-reconstruction technique to the N preprocessed coil images to yield the N streak-suppressed multi-coil images.

(A5) In embodiments of method (A4), the method includes determining a streak-suppressed multi-coil image that minimizes, to a minimization tolerance, a difference between (i) a product of a modal matrix of subspace-eigenvectors and the streak-suppressed multi-coil image and (ii) the preprocessed coil image.

(A6) In embodiments of any one of methods (A1)-(A5), generating the N streak-suppressed multi-coil images from the N coil images includes: applying an image-reconstruction technique to the N coil images to yield N multi-coil complex images, and post-processing the N multi-coil complex images with the projection matrix to yield the N streak-suppressed multi-coil images.

(A7) In embodiments of any one of methods (A1)-(A6), the projection matrix is proportional to I − Q_r^H D Q_r, where I is an identity matrix, Q_r is a matrix of a number r of eigenvectors that span the interference subspace, Q_r^H is the Hermitian transpose of Q_r, and D is a diagonal matrix.

(A8) Embodiments of any one of methods (A1)-(A7) further include combining the N streak-suppressed multi-coil images to yield the streak-suppressed MR image.

(A9) In embodiments of any one of methods (A1)-(A8), in said step of generating, the M MR signals originate in a first plurality of voxels of the subject corresponding to an artifact-region of a coil image corrupted by an artifact, and coordinates (x, y) correspond to a location within a cross-sectional plane of the subject.

(B1) A magnetic resonance imaging system includes circuitry that executes any one of methods (A1)-(A9).

(B2) In embodiments of system (B1), the circuitry includes one of an application-specific integrated circuit and a field-programmable gate array.

(B3) In embodiments of system (B1), the circuitry includes a processor, and a memory storing machine-readable instructions that, when executed by the processor, control the processor to execute any one of methods (A1)-(A9).

REFERENCES

[1] Grimm, R., Forman, C., Hutter, J., Kiefer, B., Hornegger, J., & Block, T. (2013). Fast automatic coil selection for radial stack-of-stars GRE imaging. In Proceedings of the 21st Annual Meeting of ISMRM (p. 3786).
[2] Mandava, S., Keerthivasan, M. B., Martin, D. R., Altbach, M. I., & Bilgin, A. (2019). Radial streak artifact reduction using phased array beamforming. Magnetic Resonance in Medicine, 81(6), 3915-3923.
[3] Altbach, M. I., Bilgin, A., Li, Z., Clarkson, E. W., Trouard, T. P., & Gmitro, A. F. (2005). Processing of radial fast spin-echo data for obtaining T2 estimates from a single k-space data set. Magnetic Resonance in Medicine, 54(3), 549-559.
[4] Walsh, D. O., Gmitro, A. F., & Marcellin, M. W. (2000). Adaptive reconstruction of phased array MR imagery. Magnetic Resonance in Medicine, 43(5), 682-690.
[5] Tamir, J. I., Uecker, M., Chen, W., Lai, P., Alley, M. T., Vasanawala, S. S., & Lustig, M. (2017). T2 shuffling: sharp, multicontrast, volumetric fast spin-echo imaging. Magnetic Resonance in Medicine, 77(1), 180-195.
[6] Haimovich, A. M., & Bar-Ness, Y. (1991). An eigenanalysis interference canceler. IEEE Transactions on Signal Processing, 39(1), 76-84.

Changes may be made in the above methods and systems without departing from the scope of the present embodiments. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. Herein, and unless otherwise indicated, the phrase "in embodiments" is equivalent to the phrase "in certain embodiments," and does not refer to all embodiments. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.
Similar elements are designated with the same reference signs in the drawings.

DETAILED DESCRIPTION

FIG. 1 schematically shows a magnetic resonance (MR) apparatus 1. The MR apparatus 1 has an MR data acquisition scanner 2 with a basic field magnet 3 that generates the constant magnetic field, a gradient coil arrangement 5 that generates the gradient fields, one or several radio-frequency (RF) antennas 7 for radiating and receiving radio-frequency signals, and a control computer 9 configured to perform the method. In FIG. 1, such sub-units of the magnetic resonance apparatus 1 are only outlined schematically. The radio-frequency antennas 7 may include a coil array comprising at least two coils, for example the schematically shown coils 7.1 and 7.2, which may be configured either to transmit and receive radio-frequency signals, or only to receive the radio-frequency signals (MR signals). In order to acquire MR data from an examination object U, for example a patient or a phantom, the examination object U is introduced on a bed B into the measurement volume of the scanner 2. The slab S is an example of a 3D slab of the examination object, from which MR data can be acquired using a method according to an embodiment of the present invention. The control computer 9 centrally controls the magnetic resonance apparatus 1, and can control the gradient coil arrangement 5 with a gradient controller 5′ and the radio-frequency antenna 7 with a radio-frequency transmit/receive controller 7′. The radio-frequency antenna 7 has multiple channels corresponding to the multiple coils 7.1, 7.2 of the coil arrays, in which signals can be transmitted or received. The radio-frequency antenna 7, together with its radio-frequency transmit/receive controller 7′, is responsible for generating and radiating (transmitting) a radio-frequency alternating field for manipulating the nuclear spins in a region to be examined (in particular in the slab S) of the examination object U. The control computer 9 also has an imaging protocol processor 15 that determines the reordering pattern according to an embodiment of the present invention. A control unit 13 of the control computer 9 is configured to execute all the controls and computation operations required for acquisitions. Intermediate results and final results required for this purpose, or determined in the process, can be stored in a memory 11 of the control computer 9. The units shown here should not necessarily be considered as physically separate units, but simply represent a subdivision into functional units, which can also be implemented by fewer physical units, or just one. A user can enter control commands into the magnetic resonance apparatus 1 and/or view displayed results, for example image data, from the control computer 9 via an input/output interface E/A. A non-transitory data storage medium 26 can be loaded into the control computer 9, and may be encoded with programming instructions (program code) that cause the control computer 9, and the various functional units thereof described above, to implement any or all embodiments of the method according to embodiments of the present invention, as also described above.

FIG. 2 shows a sequence diagram illustrating a Fast Spin-Echo sequence, in which a 90° pulse is followed by a train of 180° refocusing pulses, as shown on the line named "RF". For illustration purposes, this is a 2D sequence; therefore the RF pulses are transmitted concurrently with a slice-select gradient Gss. The train of refocusing RF pulses leads to an echo train e1, e2, e3, . . . shown in the signal row.
Each of these echoes is used to acquire one k-space line 12 in the two-dimensional k-space 10, wherein echo e1 corresponds to line L1, echo e2 corresponds to line L2, echo e3 corresponds to k-space line L3, etc. In order to distribute the k-space lines 12 around the two-dimensional k-space 10, phase-encode gradients GPE are used, which are incrementally changed for each echo. During acquisition, a readout gradient GRO is applied. The echoes e1, e2, . . . , e8 have a signal intensity which diminishes over the echo train due to T2 relaxation. The contrast of the final image is determined by the echo which is acquired in the centre of k-space, in this case e5, corresponding to L5. The time of this echo after the 90° pulse determines the effective echo time, TEeff. If a Fast Spin-Echo imaging protocol is performed in 3D, the slice-select gradient may be applied only once during the 90° pulse, in order to select one slab S. The further slice-select gradients are replaced by a further phase-encode gradient in a direction orthogonal to the 2D phase encode and the slice-select gradient, so that phase encoding is performed in two spatial directions, leading to the distribution of the k-space lines 12 across a volume, rather than a plane 10.

FIG. 3 illustrates such a three-dimensional k-space 14 having direction kx in the readout direction, and ky and kz in the phase-encode plane 20. A k-space line acquired during one echo is illustrated at 12. The k-space volume 14 is divided into a central region 16 and a periphery 18. Since full acquisition in the readout direction does not cost additional imaging time, usually the central region 16 will extend along the full length of the volume in the readout direction kx. However, in the phase-encode plane 20, which in this illustration is oriented in the plane of the paper, the central region 16 covers only about 1/9 of the total square phase-encode plane 20. The illustrated k-space line 12 is in the periphery 18. According to an embodiment of the present invention, the patterns or sampling orders in which k-space lines 12 are acquired in the central region 16 and in the periphery 18 are different from one another. In particular, each echo train must comprise one k-space line in the central region 16, whereas this is not true for regions of equal or comparable size to the central region which are situated in the periphery 18. An example sampling order is illustrated in FIG. 4, which shows a view onto the phase-encode plane 20 (the readout direction is perpendicular to the plane of the drawing). The numbers 1-16 indicate shots or echo trains, i.e., there are 16 shots. During each shot, one phase-encode gradient (here ky) is changed incrementally from one echo to the next, so that k-space is swept through linearly in the ky direction. However, the phase encoding in the orthogonal direction, here kz, differs significantly from a linear encoding pattern, in that each shot has one k-space line in the central region 16, which in total covers 4×4=16 k-space lines. Evidently, this is just an illustration, as there will usually be more than 16 shots and 16 steps in ky, resulting in more than a total of 16×16=256 k-space lines. Thus, in the periphery 18, the areas 19, which are the areas along one phase-encode direction (here ky) which are outside the central region, are sampled according to a purely linear sampling pattern.
In the regions 17, which are outside the central region 16 but at the same height in the ky direction, the sampling is not entirely linear, but somewhat compressed to leave space for the central region 16. FIG. 4 is just an illustration of a possible sampling pattern, and many different implementations are possible.

FIGS. 5 to 8 further illustrate different sampling patterns/reordering schemes: FIGS. 5 and 6 show a linear reordering, whereas FIGS. 7 and 8 show a linear plus checkered reordering. The figures show a view onto the phase-encode plane 20, as shown in FIG. 4. However, the shot number and the echo number are greyscale-encoded: in FIGS. 5 and 7, the colour or greyscale indicates the shot number, with the 1st shot shown in black and the 48th shot shown in white. A section 30 of the central k-space region 16 is shown enlarged in the bottom part of FIGS. 5 and 7. In FIGS. 6 and 8, by contrast, the greyscale indicates the echo number within one shot, wherein the first echo of each shot is coloured black and the last echo is coloured white. FIGS. 5 and 6 illustrate a linear sampling order, in which the central region 16, which appears relatively uniformly grey, is only covered by approximately shots 20-30. On the other hand, shots 1-20 and shots 30-48 do not go through the central region and therefore have no overlap with, for example, a low-resolution scout image. FIGS. 7 and 8, on the other hand, show a sampling order according to an embodiment of the present invention, namely linear plus checkered. This combines jittered checkerboard sampling in the central region 16 of k-space with linear sampling across the remainder of k-space. This retains the desirable linear signal evolution along ky, and in non-steady-state sequences will lead to similar contrast and blurring as obtained from a purely linear reordering. The nature of the checkered sampling order in the centre of k-space is clearly visible in the enlarged section 30. The central region 16 is further divided into a number of rectangular tiles 32, wherein 4 tiles 32 are visible in the enlarged section 30. Each tile 32 includes one k-space line from each of the total number of shots (in this case 48). By contrast, if the whole of k-space were acquired in a checkered reordering, the signal evolution within each shot might follow a linear trend, but may exhibit undesirable steps. A further embodiment is shown in FIG. 9, in which a radial reordering in the central region 16 of k-space is combined with a linear reordering in the periphery. The grey scale again illustrates the shot number, with black showing the 1st shot and white the 48th shot. Also with this reordering, the central region of k-space is covered by all shots. A still further implementation is illustrated in FIG. 10, which shows a checkered reordering in the central region 16 of k-space, combined with a radial reordering in the periphery 18.
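A toy construction of such a linear-plus-checkered shot table is sketched below in Python. It only illustrates the principle described above: the periphery is filled shot by shot in linear order, while a central block is filled checkerboard-style, with jitter, so that every shot contributes lines inside it. The grid sizes, tile handling, and all names are illustrative assumptions, not taken from the figures.

    import numpy as np

    def linear_plus_checkered(n_shots=48, n_ky=48, n_kz=48, c=12, rng=None):
        """Toy shot table: order[ky, kz] = index of the shot acquiring
        that phase-encode line. Periphery: linear (shot s mainly owns kz
        column s). Central c x c block: jittered checkerboard fill in
        which every shot owns roughly c*c/n_shots central lines."""
        rng = np.random.default_rng(0) if rng is None else rng
        order = np.empty((n_ky, n_kz), dtype=int)
        # Linear baseline: each shot covers one kz band top to bottom.
        for s in range(n_shots):
            order[:, s * n_kz // n_shots:(s + 1) * n_kz // n_shots] = s
        # Jittered checkerboard fill of the central block.
        ky0, kz0 = (n_ky - c) // 2, (n_kz - c) // 2
        cells = [(ky, kz) for ky in range(ky0, ky0 + c)
                          for kz in range(kz0, kz0 + c)]
        shots = np.resize(np.arange(n_shots), len(cells))
        rng.shuffle(shots)                   # jitter the central assignment
        for (ky, kz), s in zip(cells, shots):
            order[ky, kz] = s
        return order

    order = linear_plus_checkered()
    # Every shot acquires at least one line in the central region.
    assert set(range(48)) <= set(order[18:30, 18:30].ravel())

In an actual sequence the central lines would also have to be slotted into each shot's echo train so as to preserve the near-linear signal evolution described above; that bookkeeping is omitted here.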
The retrospective motion correction technique will now be illustrated with reference to FIG. 11. The mathematical model used is an extension of SENSE parallel imaging, as described in the above-cited paper by K. P. Pruessmann et al., with rigid-body motion parameters included in the forward model. The encoding operator E_θ for a given patient motion trajectory θ relates the motion-free image x to the acquired multi-channel k-space data s. FIG. 11 illustrates the mathematical components which contribute to the encoding at each shot. Note that for each shot i, the subject's position is described by a new set of six rigid-body motion parameters θ_i that describe the 3D position of the object. Accordingly, the multi-channel k-space data s_i for a given shot i may be related to the 3D image volume x through image rotations R_{θ_i}, image translations T_{θ_i}, coil sensitivity maps C, Fourier operator F, and under-sampling mask M_i by the following formula 1:

s_i = E_{\theta_i} x = M_i F C T_{\theta_i} R_{\theta_i} x   [1]

Using an ultra-fast low-resolution scout scan, the method according to an embodiment of the present invention provides an efficient way of directly estimating the motion trajectory θ, thus completely avoiding time-consuming alternating optimization between the image vector (formula 2) and the motion vector (formula 3):

\hat{x} = \arg\min_x \| E_{\hat{\theta}} x - s \|^2   [2]

\hat{\theta} = \arg\min_\theta \| E_\theta \hat{x} - s \|^2   [3]

Prior-art methods require repeated updates of the coupled optimization variables x and θ, using formulas 2 and 3. This can lead to convergence issues, as updates of x and θ will be computed on inaccurate information. Moreover, the reconstruction is computationally demanding, as repeated updates of x (millions of imaging voxels) are needed. If, however, a low-resolution scout image is acquired, the scout x̃ approximates the motion-free image volume x̂, and each motion state can be determined independently by minimizing the data-consistency error of the forward model:

\hat{\theta}_i = \arg\min_{\theta_i} \| E_{\theta_i} \tilde{x} - s_i \|^2   [4]

For the final image reconstruction, the individual motion states from each shot are combined, and the motion-mitigated image is obtained by solving a standard least-squares problem:

\hat{x} = \arg\min_x \| E_{\hat{\theta}} x - s \|^2   [5]

This strategy completely avoids the difficult non-linear and non-convex joint optimization that contains millions of unknowns, as it only considers six rigid-body parameters per motion optimization, and it does not require computationally costly full or partial updates of the image. This framework may also be extended to Wave-CAIPI encoding. This method exploits the available information in modern multi-channel receivers and may provide up to R=9-fold speedup for many important clinical contrasts. The sinusoidal gradients in Wave encoding lead to a spatially varying phase that is applied along the readout in hybrid space. Using the notation from the encoding model of formula [1]:

E_{\theta_i} = M_i F_{y,z} P_{yz} F_x C T_{\theta_i} R_{\theta_i}   [6]

where the Fourier transform has been modified to contain the Wave point-spread function P_{yz}.
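The per-shot search of formula [4] is low-dimensional (six rigid-body parameters), so a generic optimizer over the data-consistency error suffices. The following Python sketch illustrates the idea under strong simplifying assumptions: a toy forward model with 2D translations only (implemented as Fourier phase ramps), no coil maps, rotations, or Wave encoding, and scipy.optimize.minimize standing in for MATLAB's fminunc or the custom optimizer described below. All function names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def forward(scout, shifts, mask):
        """Toy E_theta: translate the scout via the Fourier shift theorem,
        transform to k-space, and apply the shot's sampling mask."""
        kz, ky = np.meshgrid(np.fft.fftfreq(scout.shape[0]),
                             np.fft.fftfreq(scout.shape[1]), indexing="ij")
        phase = np.exp(-2j * np.pi * (kz * shifts[0] + ky * shifts[1]))
        return mask * (np.fft.fft2(scout) * phase)

    def estimate_shot_motion(scout, s_i, mask):
        """Formula [4]: minimize the data-consistency error for one shot."""
        def dc_error(theta):
            return np.linalg.norm(forward(scout, theta, mask) - s_i)
        return minimize(dc_error, x0=np.zeros(2), method="Nelder-Mead").x

Because each shot is optimized independently, the per-shot searches are trivially parallelizable, which is one practical consequence of the fully separable formulation of formula [4].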
In Vivo Experiments

A sequence reordering according to FIGS. 7 and 8, i.e., linear + checkered reordering, was introduced into a T2-weighted SPACE sequence as well as an MPRAGE sequence, which were acquired at 1 mm³ isotropic resolution and R=4 acceleration. A contrast-matched, low-resolution T2-weighted scout scan (1×4×4 mm³ resolution and R=4-fold acceleration) preceded every imaging scan. The method was also extended to Wave-CAIPI. In vivo scans of the head were performed on a healthy subject using a 3D scanner and a 32-channel head coil. The subject was instructed to move throughout the scan. The details of the imaging protocols and scout scans are shown in Table 1 below.

TABLE 1. Imaging and scout acquisition parameters for MPRAGE and SPACE. The scout acquisition includes a 2 s external GRE reference scan which was used to estimate coil sensitivity maps. In all SPACE acquisitions (scout and imaging scan) a separate dummy shot was obtained to achieve steady-state magnetization.

                                 MPRAGE           Wave MPRAGE      T2w SPACE        SPACE-FLAIR
  Scout scan
    Resolution [mm]              1 × 4 × 4        1 × 4 × 4        1 × 4 × 4        1 × 4 × 4
    Acceleration                 4 × 3            4 × 3            4 × 3            4 × 3
    Turbo factor                 256              256              256              256
    Acquisition time [min]       0:04             0:04             0:08             0:12
  Imaging scan
    Resolution [mm]              1 × 1 × 1        1 × 1 × 1        1 × 1 × 1        1 × 1 × 1
    Acceleration                 2 × 2            3 × 2 / 3 × 3    2 × 2            2 × 2
    Turbo factor                 192              192              240              240
    Acquisition time [min]       2:40             1:45 / 1:10      2:37             4:05
  Scout & imaging scan
    TEeff/TI/TR [ms]             3.5/1100/2500    3.5/1100/2500    104/—/3200       104/—/5000
    FOV [mm]                     256 × 256 × 192  256 × 256 × 192  256 × 256 × 192  256 × 256 × 192
    Bandwidth [Hz/px]            200              200              592              592
    Wave amplitude [mT/m]        —                8                —                —
    Wave #cycles                 —                17               —                —

To reduce the computational footprint of the motion optimization, coil compression was employed, i.e., the multi-channel k-space data were compressed to a lower number of coils using SVD compression. The minimisation may be performed using MATLAB's fminunc, which is a standard implementation of a quasi-Newton optimisation algorithm for non-linear programming. Alternatively, a custom gradient-descent optimizer was used for each shot i, according to the pseudo-code provided below. In this optimization, after initialization of the motion values, the gradient ∇θ_i of the motion values is computed using finite differences. Next, the optimal step size Δs̃ for the gradient update is estimated by sampling data-consistency (dc) errors across a small set of possible step sizes. A second-order polynomial fit is used to identify the step size Δs̃ with the lowest model error, and the motion vector is updated accordingly: θ_i → θ_i + Δs̃ ∇θ_i. This process repeats until convergence is reached, e.g., while the iteration count k < k_max and the data-consistency improvement Δ∈ remains above a threshold Δ∈_min, wherein k_max is a predetermined maximum number of iterations for each optimisation step. The performance of the optimizer was determined by comparing the number of forward-model evaluations required to achieve a desired level of motion estimation accuracy.

    While shot i < N_sh:
        Initialize motion values θ_i.
        While k < k_max and Δ∈ > Δ∈_min:
            1. Calculate gradient ∇θ_i using finite differences.
            2. Estimate step size Δs̃:
               a. Compute data-consistency (dc) error across a small set of possible step sizes.
               b. Fit a second-order polynomial across the dc errors.
               c. Find the step size Δs̃ with minimum dc error.
            3. Project and update motion parameters: θ_i → θ_i + Δs̃ ∇θ_i.
            4. Compute dc error: ∈_i = ‖E_{θ_i} x̃ − s_i‖.
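A direct transcription of this pseudo-code into Python might look as follows. It reuses the illustrative forward helper from the earlier sketch, and the candidate step sizes, finite-difference step, and convergence threshold are arbitrary choices for the sketch, not values from the source.

    import numpy as np

    def gradient_descent_motion(scout, s_i, mask, n_params=2,
                                k_max=20, eps_min=1e-4):
        """Per-shot optimizer following the pseudo-code: finite-difference
        gradient of the dc error, parabola-fit step-size search, iterate
        until the dc improvement drops below eps_min."""
        def dc_error(theta):
            return np.linalg.norm(forward(scout, theta, mask) - s_i)

        theta = np.zeros(n_params)
        err_prev = dc_error(theta)
        h = 1e-3                                   # finite-difference step
        for _ in range(k_max):
            # 1. gradient via finite differences
            eye = np.eye(n_params)
            grad = np.array([(dc_error(theta + h * eye[j]) - err_prev) / h
                             for j in range(n_params)])
            # 2. sample dc errors over candidate step sizes, fit a parabola
            steps = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
            errs = [dc_error(theta - s * grad) for s in steps]
            a, b, _ = np.polyfit(steps, errs, 2)
            s_best = -b / (2 * a)
            if a <= 0 or not steps[0] <= s_best <= steps[-1]:
                s_best = steps[int(np.argmin(errs))]   # best sampled step
            # 3. update motion parameters (descend along -grad)
            theta = theta - s_best * grad
            # 4. dc error and convergence check
            err = dc_error(theta)
            if err_prev - err < eps_min:
                break
            err_prev = err
        return theta

The appeal of the polynomial step-size search is that it spends only a handful of extra forward-model evaluations per iteration, which matches the stated performance criterion of minimizing forward-model evaluations.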
The results of the motion estimation for several sequence reorderings are shown in FIGS. 12 and 13. FIG. 12 shows the ground truth of the motion which the subject performed during the scan: on top, the translation in mm; on the bottom, the rotation in degrees. Several sampling patterns were investigated and their ability to support accurate motion estimation and correction was analysed, namely a purely linear sequence reordering, a purely radial reordering, a purely checkered reordering, and a linear + checkered reordering (according to an embodiment of the present invention). The results are shown in FIG. 13. The errors in the motion correction for the translation and rotation vectors are shown for the linear reordering 40, the radial reordering 42, the checkered reordering 44, and the linear + checkered reordering 46, as illustrated in FIG. 7. The most commonly used linear reordering scheme suffers from estimation inaccuracy due to insufficient spectral frequency overlap between the low-resolution scout and the acquired k-space data. Especially near the boundaries of k-space, missing overlap between the scout and the acquired data causes the fully separable motion optimisation to become infeasible. Accordingly, for these shots, effectively no motion estimation was possible. As the distribution of k-space samples per shot broadens, the motion estimation accuracy improves. For example, in the radial reordering 42, each shot has some overlap with the low-resolution region of k-space that the scout image occupies, which generally resulted in better motion estimation accuracy. However, deviations between the estimated and ground-truth parameters were observed for some of the translation values. This is because spectral frequency support is limited to the radial ky, kz sampling direction, but unavailable in the orthogonal direction. The most accurate motion estimates were obtained from the checkered 44 and linear + checkered 46 reorderings, as spectral frequency support is now provided along all three spatial dimensions, yielding negligible motion estimation errors across all shots. Besides the motion estimation ability, the sampling pattern also affects the convergence of the image reconstruction. Compared to the other sampling patterns, the linear + checkered sampling scheme converges more rapidly, i.e., after fewer iterations, than a reconstruction with checkered sampling. This can also be seen in the reconstructed images, where linear + checkered sampling had fewer artifacts than checkered reordering.

FIG. 14 shows a flow diagram of a method according to an embodiment of the present invention. In step 60, a low-resolution scout image is acquired, and in step 62, the multi-shot 3D imaging protocol is used to acquire k-space data of the object. Steps 60 and 62 may also be done in reverse order, i.e., the scout image may be acquired after the high-resolution image data. The scout image is reconstructed using a parallel imaging reconstruction method, but without motion correction, to obtain a low-resolution 3D image in step 63. This image and the multi-channel k-space data are then used in step 64, where motion estimation is carried out according to formula [4], wherein the low-resolution scout image is used as an estimate for x and the motion parameters are estimated using an optimisation for each shot. In step 66, these estimated motion parameters are then applied to the acquired multi-channel k-space data, and the motion-corrected 3D image is reconstructed, for example according to formula [5]. In step 68, the motion-corrected image may be displayed to a user, for example on the screen of an E/A device according to FIG. 1.
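Steps 60-68 can be summarized as a short pipeline. The sketch below strings together the illustrative helpers from the earlier sketches (forward and estimate_shot_motion); it is a schematic of the data flow of FIG. 14 under the same toy translations-only assumptions, with a naive phase-ramp correction and mask-weighted inverse FFT standing in for the least-squares reconstruction of formula [5].

    import numpy as np

    def motion_corrected_recon(scout, shot_data, shot_masks):
        """FIG. 14 data flow: estimate per-shot motion against the scout
        (step 64), then reconstruct with the estimated motion undone
        (a crude stand-in for formula [5], step 66)."""
        k_corrected = np.zeros_like(shot_data[0])
        coverage = np.zeros(shot_masks[0].shape)
        for s_i, mask in zip(shot_data, shot_masks):
            theta = estimate_shot_motion(scout, s_i, mask)       # step 64
            kz, ky = np.meshgrid(np.fft.fftfreq(mask.shape[0]),
                                 np.fft.fftfreq(mask.shape[1]), indexing="ij")
            phase = np.exp(+2j * np.pi * (kz * theta[0] + ky * theta[1]))
            k_corrected += mask * s_i * phase     # undo the translation
            coverage += mask
        k_corrected /= np.maximum(coverage, 1)
        return np.fft.ifft2(k_corrected)          # step 66 (naive inverse)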
The drawings are to be regarded as schematic representations, and elements illustrated in the drawings are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software, or a combination thereof. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items, and the phrase "at least one of" has the same meaning as "and/or". Spatially relative terms, such as "beneath," "below," "lower," "under," "above," "upper," and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below," "beneath," or "under" other elements or features would then be oriented "above" the other elements or features. Thus, the example terms "below" and "under" may encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. In addition, when an element is referred to as being "between" two elements, the element may be the only element between the two elements, or one or more other intervening elements may be present. Spatial and functional relationships between elements (for example, between modules) are described using various terms, including "on," "connected," "engaged," "interfaced," and "coupled." Unless explicitly described as being "direct," when a relationship between first and second elements is described in the disclosure, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly" on, connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.). The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Expressions such as "at least one of," when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Also, the term "example" is intended to refer to an example or illustration. It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. It is noted that some example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed above. Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order. Although the flowcharts describe the operations as sequential processes, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, subprograms, etc. Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein. Units and/or devices according to one or more example embodiments may be implemented using hardware, software, and/or a combination thereof. For example, hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. Portions of the example embodiments and corresponding detailed description may be presented in terms of software, or algorithms and symbolic representations of operation on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art.
An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. In this application, including the definitions below, the term 'module' or the term 'controller' may be replaced with the term 'circuit.' The term 'module' may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware. The module may include one or more interface circuits. In some examples, the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof. The functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing. In a further example, a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module. Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired. The computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above. Examples of program code include both machine code produced by a compiler and higher-level program code that is executed using an interpreter. For example, when a hardware device is a computer processing device (e.g., a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc.), the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code.
Once the program code is loaded into a computer processing device, the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device. In a more specific example, when the program code is loaded into a processor, the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor. Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer storage medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device. The software also may be distributed over network-coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, for example, software and data may be stored by one or more computer-readable recording media, including the tangible or non-transitory computer-readable storage media discussed herein. Even further, any of the disclosed methods may be embodied in the form of a program or software. The program or software may be stored on a non-transitory computer-readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor). Thus, the non-transitory, tangible computer-readable medium is adapted to store information and is adapted to interact with a data processing facility or computer device to execute the program of any of the above-mentioned embodiments and/or to perform the method of any of the above-mentioned embodiments. According to one or more example embodiments, computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description. However, computer processing devices are not intended to be limited to these functional units. For example, in one or more example embodiments, the various operations and/or functions of the functional units may be performed by other ones of the functional units. Further, the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing units into these various functional units. Units and/or devices according to one or more example embodiments may also include one or more storage devices.
The one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive), a solid-state (e.g., NAND flash) device, and/or any other like data storage mechanism capable of storing and recording data. The one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein. The computer programs, program code, instructions, or some combination thereof, may also be loaded from a separate computer-readable storage medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism. Such a separate computer-readable storage medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer-readable storage media. The computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer-readable storage medium. Additionally, the computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network. The remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium. The one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments. A hardware device, such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS. The computer processing device also may access, store, manipulate, process, and create data in response to execution of the software. For simplicity, one or more example embodiments may be exemplified as a computer processing device or processor; however, one skilled in the art will appreciate that a hardware device may include multiple processing elements or processors and multiple types of processing elements or processors. For example, a hardware device may include multiple processors or a processor and a controller. In addition, other processing configurations are possible, such as parallel processors. The computer programs include processor-executable instructions that are stored on at least one non-transitory computer-readable medium (memory). The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc.
As such, the one or more processors may be configured to execute the processor-executable instructions. The computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective-C, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5, Ada, ASP (active server pages), PHP, Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, and Python®. Further, at least one example embodiment relates to a non-transitory computer-readable storage medium including electronically readable control information (processor-executable instructions) stored thereon, configured such that, when the storage medium is used in a controller of a device, at least one embodiment of the method may be carried out. The computer-readable medium or storage medium may be a built-in medium installed inside a computer device main body or a removable medium arranged so that it can be separated from the computer device main body. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory. Non-limiting examples of the non-transitory computer-readable medium include, but are not limited to, rewriteable non-volatile memory devices (including, for example, flash memory devices, erasable programmable read-only memory devices, or mask read-only memory devices); volatile memory devices (including, for example, static random access memory devices or dynamic random access memory devices); magnetic storage media (including, for example, an analog or digital magnetic tape or a hard disk drive); and optical storage media (including, for example, a CD, a DVD, or a Blu-ray Disc). Examples of media with a built-in rewriteable non-volatile memory include, but are not limited to, memory cards; and media with a built-in ROM include, but are not limited to, ROM cassettes; etc. Furthermore, various information regarding stored images, for example, property information, may be stored in any other form, or it may be provided in other ways. The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules. Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules. References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above. Shared memory hardware encompasses a single memory device that stores some or all code from multiple modules. Group memory hardware encompasses a memory device that, in combination with other memory devices, stores some or all code from one or more modules.
The term memory hardware is a subset of the term computer-readable medium. The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks and flowchart elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. Although described with reference to specific examples and drawings, modifications, additions and substitutions of example embodiments may be variously made according to the description by those of ordinary skill in the art. For example, the described techniques may be performed in an order different from that of the methods described, and/or components such as the described system, architecture, devices, circuit, and the like, may be connected or combined differently from the above-described arrangements, or results may be appropriately achieved by other components or equivalents. Although the present invention has been disclosed in the form of embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the present invention.
11860258
DETAILED DESCRIPTION Referring toFIG.1A, a block diagram of an exemplary magnetic resonance imaging (MRI) system100is illustrated. The MRI system100illustrates an exemplary operating environment capable of implementing aspects of the disclosed technology in accordance with one or more examples described and illustrated herein. The MRI system100in this example includes a data acquisition and display computer (DADC)150coupled to an operator console110, an MRI real-time control sequencer152, and an MRI subsystem154. The MRI subsystem154may include a gradient subsystem168that includes X, Y, and Z magnetic gradient coils and associated amplifiers, a static Z-axis magnet169, a digital radio frequency (RF) transmitter162, a digital RF receiver160, a transmit/receive switch164, and RF coil(s)166(e.g., a whole-body RF coil). The static Z-axis magnet169can provide a biasing magnetic field and the RF coil(s)166and subject P are positioned within the field of the Z-axis magnet169. The RF coil(s)166can include a transmit coil, a receive coil, and/or a transceiver coil, for example. The RF coil(s)166are in communication with a processor (e.g., the control sequencer152and/or the processing unit202of the DADC150). In various examples, the RF coil(s)166both transmit and receive RF signals relative to subject P. The MRI subsystem154can also include an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), an amplifier, a filter, and/or other modules configured to excite the RF coil(s)166and/or receive a signal from the RF coil(s)166. The MRI subsystem154may be controlled in real-time by the control sequencer152to generate magnetic and/or RF fields that stimulate magnetic resonance phenomena in a subject P to be imaged, for example to implement MRI sequences in accordance with various examples of the present disclosure. An image of an area of interest A of the subject P may be shown on display158coupled to or integral with the DADC150. The display158may be implemented through a variety of output interfaces, including a monitor, printer, and/or data storage device, for example. The area of interest A corresponds to a region associated with one or more structures or physiological activities in subject P in some examples. The area of interest shown in the example ofFIG.1Acorresponds to a chest region of subject P, but it should be appreciated that the area of interest for purposes of implementing various aspects of this technology is not limited to the chest area. It also should be appreciated that the area of interest may encompass various areas of subject P associated with various structural or physiological characteristics, such as, but not limited to, the heart region, brain region, upper or lower extremities, or other organs or tissues. Referring toFIG.1B, another MRI system170is illustrated. The system170, or selected parts thereof, can be referred to as an MR scanner. Various embodiments as disclosed herein, or any other applicable embodiments as desired or required, can be implemented within the MRI system170. The MRI system170, in one example, has a magnet172. The magnet172can provide a biasing magnetic field. A coil174and subject176are positioned within the field of magnet172. The subject176can include a human body, an animal, a phantom, or other specimen. The coil174can include a transmit coil, a receive coil, a separate transmit coil and receive coil, or a transceiver coil. The coil174is in communication with a transmitter/receiver unit178and with a processor180.
In various examples, the coil174both transmits and receives RF signals relative to subject176. The transmitter/receiver unit178can include a transmit/receive switch, an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), an amplifier, a filter, or other modules configured to excite coil174and to receive a signal from the coil174. The processor180can include a digital signal processor, a microprocessor, a controller, or other module. The processor180, in one example, is configured to generate an excitation signal (for example, a pulse sequence) for the coil174. The processor180, in one example, is configured to perform a post-processing operation on the signal received from the coil174. The processor180is also coupled to storage182, display184and output unit186. The storage182can include a memory for storing data. The data can include image data as well as results of processing performed by the processor180. In one example, the storage182provides storage for executable instructions for use by the processor180. The instructions can be configured to generate and deliver a particular pulse sequence or to implement a particular algorithm, as described and illustrated in more detail below. The display184can include a screen, a monitor, or other device to render a visible image corresponding to the subject176. For example, the display184can be configured to display a radial projection, photographic or video depictions, two-dimensional images, or other view corresponding to subject176. The output unit186can include a printer, a storage device, a network interface or other device configured to receive processed data. The system170may include the MRI coil174for acquiring raw image data from the subject176; the processor180may be capable of performing any of the operations described herein; the output186may be capable of outputting the image; and the display184may be capable of displaying the image. The output186can include one or more of a printer, storage device and a transmission line for transmitting the image to a remote location. Code for performing the above operations can be supplied to the processor180on a non-transitory machine-readable medium or any suitable computer-readable storage medium. The machine-readable medium includes executable instructions stored thereon for performing any of the methods disclosed herein or as desired or required for aspects of the technology described and illustrated herein. Referring toFIG.2, a block diagram of the exemplary DADC150is disclosed. The DADC150is capable of implementing aspects of the disclosed technology in accordance with one or more examples described herein. The DADC150may be configured to perform one or more functions associated with examples described and illustrated herein with reference toFIGS.3-9. It should be appreciated that the DADC150may be implemented within a single computing device or a computing system formed with multiple connected computing devices. The DADC150may be configured to perform various distributed computing tasks, in which processing and/or storage resources may be distributed among the multiple devices. The DADC150in this particular example includes a processing unit202, a system memory204, and a system bus206that couples the system memory204to the processing unit202. The processing unit202can include a central processing unit (CPU), processor(s), and/or special purpose logic circuitry (e.g., a field programmable gate array (FPGA) and/or an application-specific integrated circuit (ASIC))).
The system bus206may enable the processing unit202to read code and/or data to/from a mass storage device212or other computer-storage media212storing program modules214. The mass storage device212is connected to the processing unit202through a mass storage controller (not shown) connected to the system bus206. The mass storage device212and its associated computer-storage media provide non-volatile storage for the DADC150. Although the description of computer-storage media contained herein refers to a mass storage device, such as a hard disk or solid state drive, it should be appreciated by those skilled in the art that computer-storage media can be any available computer storage media that can be accessed by the DADC150. By way of example only, the mass storage device212may include volatile and/or non-volatile, removable and/or non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer storage media can include RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the DADC150. Accordingly, examples of this technology can be implemented in digital electronic circuitry, in computer hardware, in firmware, in software, or in any combination thereof. Examples can be implemented using a computer program product (e.g., a computer program, tangibly embodied in an information carrier or in a machine readable medium, for execution by, or to control the operation of, the processing unit202and/or the processor180). The computer program can be written in any programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a software module, subroutine, or other unit suitable for use in a computing environment. Thus, the examples of the technology described and illustrated herein may be embodied as one or more non-transitory computer or machine readable media, such as the mass storage device212, having machine or processor-executable instructions stored thereon for one or more aspects of the present technology, which when executed by processor(s), such as processing unit202and/or processor180, cause the processor(s) to carry out the steps necessary to implement the methods of this technology, as described and illustrated with the examples herein. In some examples, the executable instructions are configured to perform one or more steps of a method, such as one or more of the exemplary methods described and illustrated below with reference toFIGS.3-9, for example. According to various examples of this technology, the DADC150may operate in a networked environment using connections to other local or remote computers through a network216via a network interface unit210connected to the system bus206. The network interface unit210may facilitate connection of the computing device inputs and outputs to one or more suitable networks and/or connections such as a local area network (LAN), a wide area network (WAN), the Internet, a cellular network, a radio frequency (RF) network, a Bluetooth-enabled network, a Wi-Fi enabled network, a satellite-based network, or other wired and/or wireless networks for communication with external devices and/or systems.
The DADC150may also include an input/output controller208for receiving and processing input from any of a number of input devices. Input devices may include one or more of keyboards, mice, stylus, touchscreens, microphones, audio capturing devices, and image/video capturing devices. An end user may utilize the input devices to interact with a user interface, for example a graphical user interface, for managing various functions performed by the DADC150. The program modules214may include instructions operable to perform tasks associated with examples illustrated in one or more ofFIGS.3-9. The program modules214may include an imaging application218for performing data acquisition and/or processing functions as described herein, for example to instruct the control sequencer152and/or acquire and/or process image data corresponding to magnetic resonance imaging of an area of interest A. The DADC150can include a data store220for storing data that may include imaging-related data222such as acquired data from the implementation of MRI pulse sequences in accordance with various examples of the present disclosure. Referring back toFIG.1A, the operation of the MRI system100in some examples is controlled from the operator console110, which includes one or more processors coupled to memory (e.g., a non-transitory computer readable medium) via a system bus and configured to execute programmed instructions stored in the memory to carry out one or more steps of the technology disclosed herein. The operator console110can also include a keyboard, a control panel, and/or a display. The operator console110communicates through a link with the DADC150to enable an operator to control the operation of the control sequencer152and production and display of images on the display158. In other examples, the operator console110can communicate directly with the control sequencer152to control one or more aspects of the MRI subsystem154. Thus, in some examples, the DADC150receives commands from the operator console110that indicate the scan sequence and/or other parameters of the scan that is to be performed. The control sequencer152, which is also referred to as a pulse generator, then operates the MRI system100components to carry out the desired scan sequence. In some examples, the DADC150produces data that indicates the timing, amplitude, and shape of the RF pulses which are to be produced, and the timing and length of the data acquisition window, which is used to instruct the control sequencer152. The control sequencer152connects to the gradient amplifiers of the gradient subsystem168, to indicate the timing and shape of the gradient pulses to be produced during the MRI scan. The gradient waveforms produced by the control sequencer152are applied to the gradient amplifiers of the gradient subsystem168, each of which excites a corresponding gradient coil in the gradient subsystem168to produce the magnetic field gradients used for spatially encoding acquired signals. The gradient subsystem168forms part of the MRI subsystem154, which includes a polarizing magnet (e.g., static Z axis magnet169) and a whole-body RF coil (e.g., RF coil(s)166) in some examples. A transceiver (e.g., RF transmitter162) produces pulses that are amplified by an RF amplifier and coupled to the RF coil(s)166by the transmit/receive switch164. The resulting signals radiated by the excited nuclei in the subject may be sensed by the same RF coil(s)166and coupled through the transmit/receive switch164to a preamplifier.
The amplified signals are demodulated, filtered, and digitized in a receiver (e.g., RF receiver160). The transmit/receive switch164is controlled by a signal from the control sequencer152to electrically connect the RF amplifier to the RF coil(s)166during the transmit mode and to connect the preamplifier during the receive mode. In some examples, the transmit/receive switch164also enables a separate RF coil (e.g., a head coil or surface coil of the RF coil(s)166) to be used in either the transmit or receive mode. The signals picked up by the RF coil(s)166are digitized by the RF receiver160and transferred to the DADC150. When the scan is completed and an array of data has been acquired by the DADC150, the processing unit202of the DADC150operates to transform (e.g., Fourier transform) the data into the imaging data222via a reconstruction technique, as described and illustrated in more detail below. In response to commands received from the operator console110, this imaging data222may be archived on the mass storage device212or elsewhere, further processed by the processing unit202, conveyed to the operator console110for display, and/or presented on the display158. The display for operator console110and display158may be the same physical device. Referring toFIG.3, a flowchart illustrating an exemplary method for compensating for self-squared Maxwell gradient terms, and for quadratic Maxwell gradient cross-terms associated with added zero-moment waveforms, is illustrated. In step300in this example, the DADC150generates an original encoding gradient waveform (e.g., a spiral waveform). Spiral trajectories for the gradient waveform offer advantages for acquisition speed, SNR efficiency, and robust performance with motion as compared to Cartesian sampling. Referring toFIG.4, a pulse-sequence diagram of TSE imaging using spiral-rings encoding based on a spiral-in-out trajectory is illustrated. Although Maxwell gradient effects exist in Cartesian TSE imaging, spiral TSE imaging presents additional challenges because spiral waveforms generally vary along the spin-echo train, as opposed to Cartesian TSE imaging for which the same readout waveform is used for every echo. In particular, the spiral-ring waveforms for each echo along the spin-echo train in this example vary differently and are temporally asymmetric, thus differing from Cartesian TSE imaging and interleaved-spiral TSE imaging. At lower magnetic fields, the Maxwell term effects on images associated with spiral-rings can become substantial. While spiral trajectories (e.g., spiral-out or spiral-in-out) are used in the examples described and illustrated herein, other non-rectilinear trajectories can also be used in other examples. Accordingly, the technology described herein can be used with other trajectory types that are time-varying and asymmetric along the spin-echo train in TSE imaging. In step302, the DADC150modifies a portion of the original encoding gradient waveform generated in step300to generate a modified gradient waveform. In one example, the modified portion is a trapezoidal gradient segment and is at the end of the waveform corresponding to a particular echo. By lengthening or shortening, for example, the trapezoidal gradient segment of the original encoding gradient waveform not concurrent with data sampling and correspondingly decreasing or increasing, respectively, its gradient amplitude, the Maxwell integral can be decreased or increased, respectively, while the original zeroth gradient moment can be maintained.
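To make the zeroth-moment bookkeeping of step302 concrete, the following is a minimal numerical sketch, not the patented implementation: it assumes an idealized trapezoidal lobe with illustrative amplitudes and durations, and checks that shortening the flat top while raising the amplitude preserves the zeroth gradient moment (the area under the lobe) while changing the Maxwell integral of the squared gradient.

```python
import numpy as np

dt = 1e-6  # s, sampling interval (assumed)

def trapezoid(amplitude, ramp, flat):
    """Sample a trapezoidal gradient lobe: ramp up, flat top, ramp down."""
    up = np.linspace(0.0, amplitude, int(ramp / dt), endpoint=False)
    top = np.full(int(flat / dt), amplitude)
    down = np.linspace(amplitude, 0.0, int(ramp / dt), endpoint=False)
    return np.concatenate([up, top, down])

def m0(g):
    return np.sum(g) * dt          # zeroth gradient moment (area)

def maxwell(g):
    return np.sum(g**2) * dt       # self-squared Maxwell integral

# Original trapezoidal segment (amplitude in mT/m; values are illustrative).
g_orig = trapezoid(amplitude=20.0, ramp=200e-6, flat=800e-6)

# Shorten the flat top and raise the amplitude so the area is unchanged:
# same zeroth moment, larger integral of g**2 (larger Maxwell integral).
g_mod = trapezoid(amplitude=25.0, ramp=200e-6, flat=600e-6)

print(f"zeroth moment:    {m0(g_orig):.4e} vs {m0(g_mod):.4e}")
print(f"Maxwell integral: {maxwell(g_orig):.4e} vs {maxwell(g_mod):.4e}")
```

Running the sketch shows the two lobes have (to within discretization error) the same zeroth moment while the shortened, taller lobe has a larger Maxwell integral, which is the degree of freedom step302 exploits.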
Referring toFIG.5, an exemplary waveform diagram of a spiral-in-out gradient waveform500before (top panel) and after (bottom panel) modification according to the exemplary method ofFIG.3to compensate for self-squared Maxwell gradient terms is illustrated. In this example, the trapezoidal gradient segment502of the spiral-in-out original encoding gradient waveform504is shortened in the modified gradient waveform500, while the zeroth moment is maintained, to result in the trapezoidal gradient segment506. Referring back toFIG.3, in step304, the DADC150adds one or more zero-moment waveform segments to the modified gradient waveform resulting from the modification introduced to the original encoding gradient waveform in step302. In one example, the added zero-moment waveform segments can be two bipolar gradient pairs, one at either end (i.e., one or both opposing ends) of the waveform resulting from step302, but other types of waveform segments can be added in other examples. Since gradient duration is a discrete variable, and changes in Maxwell integral achievable by modifying portions of the original encoding gradient waveform are limited, the change in Maxwell integral required to facilitate a desired substantially zero Maxwell integral at the spin-echo time, and a desired substantially equivalent Maxwell-integral magnitude at the beginning and end of the gradient waveform, may not be achieved by only modifying portion(s) (e.g., trapezoidal gradient segment) of the original encoding gradient waveform. Accordingly, in this particular example, adding bipolar gradient pairs permits the Maxwell integral to be modified to the extent necessary while the original zeroth moment is maintained. To generate the characteristics (e.g., shape and amplitude) of the bipolar gradient pairs in some examples, the DADC150determines, for each excitation or shot, the maximum Maxwell field integral M_max from self-squared terms for each gradient axis (e.g., gx and gy) from the spiral-ring or other original encoding gradient waveform with the highest gradient amplitude or maximum Maxwell integral, respectively. The DADC150then determines the amplitudes and durations of the bipolar gradient pairs based on the difference between the Maxwell field integral M_i for the current ith ring and M_max. Additionally, in this example, the DADC150constrains the magnitude of the Maxwell integral at the beginning and end of each modified gradient waveform to be a constant value of M_max/2. Referring back toFIG.5, a first bipolar gradient pair508is added at the beginning of the modified gradient waveform500and a second bipolar gradient pair510is added at the end of the modified gradient waveform500. The added first and second bipolar gradient pairs508and510, respectively, each have a zeroth moment equal to zero. As illustrated inFIG.5with respect to the original encoding gradient waveform504, without the Maxwell term compensation described and illustrated by way of the examples herein, the Maxwell integrals at times A and C (i.e., the beginning and end) are not equal in magnitude and the Maxwell integral at time B (i.e., the spin-echo time) is not zero. However, as illustrated with respect to the modified gradient waveform500, with the Maxwell term compensation described and illustrated by way of the examples herein, the Maxwell integral magnitudes at times A and C are substantially equal and the Maxwell integral at time B is substantially zero.
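The sizing of the added pairs can be sketched in the same spirit. The following Python fragment is an assumption-laden sketch rather than the patented algorithm: the per-ring Maxwell integrals, the fixed lobe amplitude, and the idealized rectangular lobes are all hypothetical. It sizes one zero-moment bipolar pair for each end of each ring's waveform so that the two pairs together make up the deficit M_max − M_i.

```python
import numpy as np

dt = 1e-6  # s, sampling interval (assumed)

def bipolar(amplitude, lobe):
    """Rectangular bipolar pair: positive lobe then negative lobe, zero net moment."""
    n = int(lobe / dt)
    return np.concatenate([np.full(n, amplitude), np.full(n, -amplitude)])

def maxwell_integral(g):
    return np.sum(g**2) * dt

# Per-ring self-squared Maxwell integrals for one shot (illustrative values).
M_rings = np.array([0.46, 0.37, 0.21, 0.09, 0.02])
M_max = M_rings.max()

for i, M_i in enumerate(M_rings):
    deficit = M_max - M_i        # extra Maxwell integral the pairs must supply
    per_pair = deficit / 2.0     # one pair at each end of the waveform
    amp = 15.0                   # fixed lobe amplitude (assumed)
    # A rectangular pair of total duration 2*lobe adds amp**2 * 2*lobe to the
    # Maxwell integral and nothing to the zeroth moment.
    lobe = per_pair / (2.0 * amp**2) if per_pair > 0 else 0.0
    pair = bipolar(amp, lobe)
    print(f"ring {i}: deficit {deficit:.3f}, lobe {lobe*1e6:.0f} us, "
          f"added M {2*maxwell_integral(pair):.3f}, net moment {np.sum(pair)*dt:.1e}")
```

Each printed net moment is zero while the added Maxwell integral matches the ring's deficit, which is the property the constraint of M_max/2 at the waveform ends relies on; the even split between the two ends is a simplification for illustration.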
While only one gradient axis (i.e., gx) is illustrated inFIG.5, the corresponding waveform, modified as described with reference to steps302and304, can be implemented with respect to another gradient axis (e.g., gy) in the two-dimensional MRI of the examples described herein. Accordingly, by modifying the trapezoidal gradient segment502, as explained with reference to step302, and adding the bipolar gradient pairs508and510, as explained with reference to step304, this technology advantageously reduces the phase shift from the Maxwell self-squared terms at each echo and reduces the difference in phase shifts among echo spacings. Referring toFIG.6, a set of TSE images using an interleaved spiral-out trajectory with and without compensation for self-squared Maxwell gradient terms according to the exemplary method ofFIG.3is illustrated. While a first uncompensated image600and a first compensated image602show no significant artifacts at slice position z=0 cm, the second uncompensated image604shows substantial artifacts at slice position z=12 cm. In the second compensated image606, no significant artifacts are seen at slice position z=12 cm. In this example, the waveforms used for the gxand gygradient axes were modified according to steps302-304ofFIG.3to compensate for the self-squared Maxwell gradient terms resulting in the relatively higher quality first and second compensated images602and606, respectively, that are generated during image reconstruction. Referring toFIGS.7A-B, exemplary simulation results showing the Maxwell phase accruals along the spin-echo train and during the spiral readouts are illustrated for TSE imaging using spiral-rings encoding based on a spiral-in-out trajectory. While adding the bipolar gradient pairs achieves an increase in the Maxwell integral while maintaining the original zeroth moment, and results in improved image quality over uncompensated images, the quadratic Maxwell gradient cross-terms from the added bipolar gradient pairs are relatively large and therefore negatively impact image quality. Referring more specifically toFIG.7A, in this particular example, the Maxwell phase pathway from the self-squared Maxwell gradient terms along the spin-echo train with (blue) and without (black) sequence modifications in the axial plane are illustrated. eRF and rRF denote excitation and refocusing RF pulses, respectively. The red circles denote the k-space center, while the orange arrows denote the effects of selected refocusing RF pulses, which alternate the sign of the phase error throughout the spin-echo train. After adding compensation gradients, the accrued phase for each echo spacing starts at −ϕ and ends at ϕ, where ϕ is a constant value, and the phase at the k-space center (and at all other spin-echoes) is zero. The green dashed boxes indicate examples of the result of increased Maxwell phase by added bipolar gradients (e.g., analogous to first bipolar gradient pair508and second bipolar gradient pair510). Referring more specifically toFIG.7B, the outer rings produce larger self-squared terms than the inner rings and the self-squared Maxwell gradient terms in this example are substantially larger than the quadratic Maxwell gradient cross-terms. Specifically, the first ring produces the largest Maxwell phase accrual while the central ring has the smallest value. For the sagittal scan example, the quadratic Maxwell gradient cross-term is relatively small compared to the self-squared Maxwell gradient term.
Referring back toFIG.3, in step306, the DADC150optionally further modifies the gradient waveform generated in step304by reversing the polarity of one of the bipolar gradient pairs for the case when four bipolar gradient pairs are added in step304across the two gradient axes (e.g., gxand gy) within a given echo spacing. By setting the polarity of one of the four added bipolar gradient pairs of the gradient waveform associated with the echo spacing to be the opposite of the other three of the four added bipolar gradient pairs, a self-balancing of the quadratic Maxwell gradient cross-terms induced by the four added bipolar gradient pairs is achieved. Referring toFIG.8, an exemplary pulse-sequence diagram of TSE imaging using spiral-rings encoding based on a spiral-in-out trajectory and incorporating bipolar gradient pairs that compensate for self-squared Maxwell gradient terms and for quadratic Maxwell gradient cross-terms associated with added bipolar waveforms is illustrated. In this particular example, a first bipolar pair800and a second bipolar pair802are added to one of the echo spacings of a first imaging gradient804, which can be associated with one gradient axis (e.g., gx). Additionally, a third bipolar pair806and a fourth bipolar pair808are added to the same one of the echo spacings of a second imaging gradient810, which can be associated with a different gradient axis (e.g., gy). In this example, the polarity of the fourth bipolar pair808is reversed as compared to the polarity of the first, second, and third bipolar pairs800,802, and806, respectively. More specifically, the fourth bipolar pair808has a negative lobe812before a positive lobe814whereas each of the first, second, and third bipolar pairs800,802, and806, respectively, has a positive lobe before a negative lobe. Referring back toFIG.3, in step308, the DADC150instructs the control sequencer152to excite the coil(s) of the gradient subsystem168according to the gradient waveform to generate a magnetic field gradient, optionally in two gradient axes (i.e., gxand gy). In some examples, the operator console110can be used to establish, or the DADC150can be configured to determine, additional time to be added to the echo spacing to achieve compensation as a result of the gradient waveform modifications of, or gradient waveform segments (e.g., bipolar gradients) added to, the original encoding gradient waveform generated in step300. In step310, the DADC150obtains and digitizes image data following detection of nuclear magnetic resonance (NMR) signals by the RF coil (e.g., RF coil(s)166). The DADC150then processes the image data to generate image(s) of the subject, optionally applying corrections for Maxwell phase shift accrual during sampling and/or phase accrual during sampling due to off-resonance effects. In one particular example, the DADC150reconstructs the images by, for the axial orientation, demodulating the imaging data from each interleave or ring by multiplying it by its own Maxwell phase factor $e^{-i\phi_c(z,t)}$, where $\phi_c(z,t)=\frac{\gamma z^2}{2B_0}\int_0^t\left(g_x^2(t')+g_y^2(t')\right)\,dt'$. For sagittal and coronal orientations, the DADC150in some examples demodulates the imaging data in an analogous manner as for the axial orientation and then performs multi-frequency interpolation (MFI) to mitigate the in-plane blurring caused by spatial and time dependent Maxwell term phase error.
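The self-balancing of step306 can be illustrated with a toy calculation. The sketch below is illustrative only: the lobe amplitudes, durations, and the assumption that the pairs on the two axes overlap in time are all hypothetical. It computes the quadratic cross-term integral ∫gx(t)gy(t)dt contributed by bipolar pairs at both ends of an echo spacing, with and without reversing the polarity of the fourth pair.

```python
import numpy as np

n, dt = 100, 1e-6        # samples per lobe and sampling interval (assumed)
ax, ay = 12.0, 15.0      # lobe amplitudes on the two axes (illustrative)

def bipolar(amplitude, sign=+1):
    """Rectangular bipolar pair; sign=-1 reverses polarity (negative lobe first)."""
    return sign * np.concatenate([np.full(n, amplitude), np.full(n, -amplitude)])

def cross_term(sign_fourth):
    """Integral of gx*gy over an echo spacing with a pair at each end of each axis."""
    gx = np.concatenate([bipolar(ax), np.zeros(400), bipolar(ax)])
    gy = np.concatenate([bipolar(ay), np.zeros(400), bipolar(ay, sign=sign_fourth)])
    return np.sum(gx * gy) * dt

print("all pairs same polarity:     ", cross_term(+1))  # nonzero cross-term
print("fourth pair polarity flipped:", cross_term(-1))  # balances to zero
```

With all four pairs sharing one polarity, the two ends of the echo spacing contribute cross-terms of the same sign; flipping the fourth pair makes the end contributions equal and opposite, so the net cross-term integral vanishes, mirroring the self-balancing described above.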
In one particular example for spiral-ring encoding described with respect to the sagittal orientation, the $x^2$ component $\gamma\,x^2\int_0^t \frac{g_z^2(t')}{8B_0}\,dt'$ is removed. Then, for each ring, (1) a scaled Maxwell term time parameter $t_c(t)$ for each spiral ring trajectory is calculated as $t_c(t)=\frac{1}{g_m^2}\int_0^t g_0^2(t')\,dt'$ and (2) a time-independent frequency offset $f_c(y,z)$ is given by $f_c(y,z)=\frac{\gamma g_m^2}{2\pi}\,\frac{1}{4B_0}\left(\frac{y^2}{4}+z^2\right)$. The Maxwell term map of the sagittal plane can be generated based on this equation, and MFI deblurring can then be applied to correct the offsets induced by Maxwell terms by partitioning the range of constant frequency offsets $f_c(y,z)$ into bins. For a general oblique orientation, the DADC150can calculate the Maxwell term map as a time-independent frequency offset $f_c(X,Y,Z)$ given by $f_c(X,Y,Z)=\frac{\gamma g_m^2}{2\pi}\,\frac{1}{4B_0}\left(F_1X^2+F_2Y^2+F_3Z^2+F_4YZ+F_5XZ+F_6XY\right)$, where the $F_i$ are constants calculated from the rotation matrix used to rotate from the logical to the physical coordinate system. Other methods for facilitating reconstruction-based correction to generate the TSE images can also be used in other examples. Referring toFIG.9, another set of TSE images, using spiral-rings encoding based on a spiral-in-out trajectory, without compensation (upper left), with compensation for self-squared Maxwell gradient terms (upper right), with additional compensation for quadratic Maxwell gradient cross-terms associated with added bipolar waveforms (lower left), and with additional compensation during reconstruction (lower right), is illustrated. In a first image900, no Maxwell compensation was applied during acquisition of the image data and first and second artifacts902and904, respectively, are present, among others. In a second image906, compensation was applied, as explained above with reference to steps302-304ofFIG.3, but the polarity reversal of one of the added bipolar gradient pairs described above with reference to step306ofFIG.3was not applied. In the second image906, though of higher quality than the first image900, third and fourth artifacts908and910, respectively, are present. In a third image912, compensation was applied as explained above with reference to steps302-306ofFIG.3, including the polarity reversal of one of the added bipolar gradient pairs, which resulted in improved image quality as compared to the second image906, although a minor fifth artifact914was present. In a fourth image916, compensation was applied as explained above with reference to steps302-306ofFIG.3and the fourth image916was reconstructed as explained with reference to step310ofFIG.3, resulting in a higher quality image as compared to the third image912; the fourth image916had no significant artifacts. As described and illustrated by way of the examples herein, the interleaved-spiral and spiral-rings T2-weighted 2D-TSE pulse-sequence examples of this technology advantageously incorporate gradient waveform modifications to compensate for the self-squared Maxwell gradient terms and, optionally, quadratic Maxwell gradient cross-terms associated with added zero-moment waveforms, at both the echoes and over echo spacings. This technology provides substantial improvement in image quality at relatively low magnetic field strength (e.g., 0.55 T and 1.5 T) for degradation associated with concomitant-gradient effects during TSE acquisitions.
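As a rough illustration of the reconstruction-based correction, the sketch below builds a time-independent Maxwell frequency-offset map for a sagittal plane from the $f_c(y,z)$ expression above and partitions it into constant-frequency bins for MFI deblurring. The field strength, maximum gradient amplitude, grid extent, and bin count are assumptions chosen only for demonstration.

```python
import numpy as np

# Illustrative scan parameters (assumed, not taken from the patent).
B0 = 0.55                      # T, main field
g_m = 24e-3                    # T/m, maximum gradient amplitude of the rings
gamma = 2 * np.pi * 42.58e6    # rad/s/T, proton gyromagnetic ratio

# Sagittal-plane grid (y, z in meters).
y, z = np.meshgrid(np.linspace(-0.2, 0.2, 128), np.linspace(-0.2, 0.2, 128))

# Time-independent Maxwell frequency-offset map f_c(y, z) in Hz.
f_c = (gamma * g_m**2) / (2 * np.pi * 4 * B0) * (y**2 / 4 + z**2)

# MFI: partition the offset range into a small number of constant-frequency
# bins; each bin would be demodulated at its center frequency and the results
# interpolated to deblur the image.
n_bins = 8
edges = np.linspace(f_c.min(), f_c.max(), n_bins + 1)
bin_index = np.clip(np.digitize(f_c, edges) - 1, 0, n_bins - 1)
centers = 0.5 * (edges[:-1] + edges[1:])
print("bin center frequencies (Hz):", np.round(centers, 1))
```

At these assumed 0.55 T parameters the offsets reach a few hundred hertz at the edge of the field of view, which is the order of magnitude at which the in-plane blurring discussed above becomes visible.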
The sequence-based compensation of this technology also corrects for echo-by-echo phase variations, while maintaining the CPMG condition, and provides image reconstruction-based compensation that mitigates the residual Maxwell term-induced phase error along the readout window. It should be appreciated that any number or type of computer-based medical imaging systems or components, including various types of commercially available medical imaging systems and components, may be used to practice certain aspects of the disclosed technology. Systems as described herein with respect to example embodiments are not intended to be specifically limited to MRI implementations or the particular system shown inFIG.1A-B. Although examples of this technology are explained in some instances in detail herein, it is to be understood that other examples are contemplated. Accordingly, it is not intended that the present disclosure be limited in its scope to the details of construction and arrangement of components set forth in the foregoing description or illustrated in the drawings. The present disclosure is capable of other embodiments and of being practiced or carried out in various ways. It should be appreciated that any of the components or modules referred to with regard to any of the examples discussed herein, may be integrally or separately formed with one another. Further, redundant functions or structures of the components or modules may be implemented. Moreover, the various examples may communicate locally and/or remotely with any user/clinician/patient or machine/system/computer/processor. Moreover, the various components may be in communication via wireless and/or hardwire or other desirable and available communication means, systems and hardware. Moreover, various components and modules may be substituted with other modules or components that provide similar functions. It should be appreciated that the devices and related components discussed herein may take on all shapes along the entire continual geometric spectrum of manipulation of x, y and z planes to provide and meet the anatomical, environmental, and/or structural demands and operational requirements. Moreover, locations and alignments of the various components may vary as desired or required. It should also be appreciated that various sizes, dimensions, contours, rigidity, shapes, flexibility and materials of any of the components or portions of components in the various embodiments discussed throughout may be varied and utilized as desired or required. Additionally, it should be appreciated that while some dimensions are provided on the aforementioned figures, any of the devices may take on various sizes, dimensions, contours, rigidity, shapes, flexibility and materials as they pertain to the components or portions of components of the device, and therefore may be varied and utilized as desired or required. In describing the examples herein, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. It is also to be understood that the mention of one or more steps of a method does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified.
Steps of a method may be performed in a different order than those described herein without departing from the scope of the present disclosure. Similarly, it is also to be understood that the mention of one or more components in a device or system does not preclude the presence of additional components or intervening components between those components expressly identified. As discussed herein, a “subject” may be any applicable human, animal, or other organism, living or dead, or other biological or molecular structure or chemical environment, and may relate to particular components of the subject, for instance specific tissues or fluids of a subject (e.g., human tissue in a particular area of the body of a living subject), which may be in a particular location of the subject, referred to herein as an “area of interest” or a “region of interest.” It should be appreciated that an animal may be a variety of any applicable type, including, but not limited thereto, mammal, veterinarian animal, livestock animal or pet type animal, etc. As an example, the animal may be a laboratory animal specifically selected to have certain characteristics similar to human (e.g. rat, dog, pig, monkey), etc. It should be appreciated that the subject may be any applicable human patient, for example. It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Ranges may be expressed herein as from “about” or “approximately” one particular value and/or to “about” or “approximately” another particular value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value. The term “about,” as used herein, means approximately, in the region of, roughly, or around. When the term “about” is used in conjunction with a numerical range, it modifies that range by extending the boundaries above and below the numerical values set forth. In general, the term “about” is used herein to modify a numerical value above and below the stated value by a variance of 10%. In one aspect, the term “about” means plus or minus 10% of the numerical value of the number with which it is being used. Therefore, about 50% means in the range of 45%-55%. Numerical ranges recited herein by endpoints include all numbers and fractions subsumed within that range (e.g. 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.90, 4, 4.24, and 5). Similarly, numerical ranges recited herein by endpoints include subranges subsumed within that range (e.g. 1 to 5 includes 1-1.5, 1.5-2, 2-2.75, 2.75-3, 3-3.90, 3.90-4, 4-4.24, 4.24-5, 2-5, 3-5, 1-4, and 2-4). It is also to be understood that all numbers and fractions thereof are presumed to be modified by the term “about.” Having thus described the basic concepts of the disclosed technology, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention.
Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.
11860259
DETAILED DESCRIPTION Specific embodiments of the invention are further described by combining the drawings and the technical solutions. The optical-electronic integrated RF leakage interference cancellation system for the CW radar consists of a microwave photonic link, a cable link, an electronic coupler and a feedback control unit. The microwave photonic link is composed of an electro-optic modulation unit, an optically enabled microwave phase shift unit, an optically enabled microwave time delay unit, an optically enabled microwave amplitude tuning unit, and a photo detection unit, which are connected in sequence by optical fibers or optical waveguides. The cable link connects an electronic circulator, a cable, and an electronic coupler. The electronic coupler has two input ports and two output ports. The two input ports are connected with the microwave photonic link and the cable link, respectively. Of the two output ports, one outputs the residual leakage signal after the cancellation between the microwave photonic link and the cable link, which is input to the feedback control unit; the other outputs the target signal after the cancellation between the microwave photonic link and the cable link. The feedback control unit monitors the residual leakage signal from the electronic coupler and then generates the control signals for phase adjustment, time delay adjustment and amplitude adjustment to the optically enabled microwave phase shift unit, the optically enabled microwave time delay unit and the optically enabled microwave amplitude tuning unit, respectively, which forms the feedback control loop. Embodiment FIG.1is the structure diagram of the optical-electronic integrated RF leakage interference cancellation system. The low power target signal received by the transceiver antenna is input to the cable link via the electronic circulator. Simultaneously, the high power RF leakage interference signal from the continuous wave transmitter source is also input to the cable link via the electronic circulator. The target signal and the RF leakage interference signal are transmitted through the cable to the input port of the electronic coupler. The tapped reference signal from the continuous wave source in the transmitter is input to the function integrated unit for the electro-optic modulation and the optically enabled microwave phase shifting. As shown inFIG.2, the function integrated unit for the electro-optic modulation and the optically enabled microwave phase shift consists of a laser, a unit for the generation of the signal sideband and a unit for phase shifting of the optical carrier. The frequency of the optical carrier from the laser is fc, with a wavelength of 1549.5 nm, and the spectrum diagram is shown inFIG.3. The optical carrier is split into two paths: one is input to the unit for the generation of the signal sideband, and the other is input to the unit for phase shifting of the optical carrier. The reference signal tapped from the CW signal source in the transmitter, with a frequency of 14 GHz, is input to the unit for the generation of the signal sideband and is modulated onto the optical carrier from the laser. The single sideband carrier-suppressed signal, with the schematic spectrum diagram shown inFIG.4, is generated, and the measured spectrum is shown inFIG.5. FromFIG.5it can be seen that the optical carrier and the right sideband are suppressed, and the single left sideband signal is obtained by the unit for the generation of the signal sideband.
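The principle that a phase added to the optical carrier maps directly onto the recovered microwave signal can be checked with a small complex-envelope model. The sketch below is illustrative only: it represents the phase-shifted carrier and the single left sideband as complex tones and keeps just the beat term produced by square-law photodetection; the time window and step are arbitrary choices.

```python
import numpy as np

f_rf = 14e9                       # Hz, reference signal frequency
t = np.arange(0, 2e-9, 1e-12)     # 2 ns window, 1 ps steps (assumed)

def detected_rf(phi):
    """Photodetector beat note for a carrier phase shift of phi (radians)."""
    carrier = np.exp(1j * phi)                      # phase-shifted optical carrier
    sideband = np.exp(-1j * 2 * np.pi * f_rf * t)   # left sideband at f_c - f_rf
    # Square-law detection: the term at f_rf is carrier * conj(sideband),
    # i.e., cos(2*pi*f_rf*t + phi).
    return np.real(carrier * np.conj(sideband))

rf_0 = detected_rf(0.0)
rf_pi = detected_rf(np.pi)        # phi = 180 degrees
print("out of phase:", np.allclose(rf_0, -rf_pi))  # True: inverted microwave signal
```

The check confirms that shifting the optical carrier by 180° inverts the detected microwave signal, which is exactly the condition the cancellation path exploits.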
The added phase of φ is applied to the optical carrier by the unit for phase shifting of the optical carrier, and the schematic diagram of the output spectrum is shown inFIG.6. The single sideband signal and the phase shifted optical carrier are combined, with the schematic diagram of the output spectrum shown inFIG.7. By adjusting the added phase of the optical carrier, different phase shifts of the microwave signal can be obtained, for example φ=180°. The optically enabled microwave time delay unit applies the time delay tuning to the optically carried microwave signal from the function integrated unit for the electro-optic modulation and the optically enabled microwave phase shifting. The optically enabled microwave amplitude tuning unit exerts the amplitude adjustment on the optically carried microwave signal from the optically enabled microwave time delay unit. The optically carried microwave signal with the phase shift, time delay and amplitude adjustment feeds the photo detection unit, where the optical-to-electronic conversion is completed and the cancellation signal is then output. The cancellation signal from the optical-to-electronic conversion unit and the leakage interference are combined via the electronic coupler. When the phase shift is φ=180°, the phase of the cancellation signal is opposite to that of the leakage interference, and the cancellation occurs in the process of combining. The feedback control unit monitors the residual leakage signal from the electronic coupler and generates the control signal via data processing and algorithms. The control signal adjusts the phase, time delay and amplitude of the optically carried microwave signal in the optical domain, and the photo detection unit generates a cancellation signal that is out of phase with, equal in amplitude to, and time-matched with the leakage interference signal. When the cancellation signal and the leakage interference signal entering the electronic coupler satisfy these conditions of opposite phase, equal amplitude and matched timing, they cancel each other completely in the process of combining by the electronic coupler. The RF leakage signal is cancelled, and the target signal received by the transceiver antenna is recovered. The above contents are a further detailed description of the invention. The embodiments of the invention are not limited to this description. For persons skilled in the related technical field, it is possible to make derivations and substitutions without departing from the spirit and scope of the invention. Such derivations and substitutions should also be regarded as falling within the protection scope of the invention.
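A toy numerical model of the cancellation condition may help fix ideas. In the sketch below, all amplitudes, delays, and signal parameters are hypothetical; the cancellation path is tuned to match the leakage in amplitude and delay with a 180° phase shift, and the suppression at the coupler output is computed.

```python
import numpy as np

fs, f_rf = 100e9, 14e9            # sample rate and RF frequency (assumed)
t = np.arange(0, 5e-9, 1 / fs)

leak = 0.8 * np.cos(2 * np.pi * f_rf * (t - 35e-12))   # leakage via the cable link
target = 1e-3 * np.cos(2 * np.pi * f_rf * t + 1.2)     # weak target return

def residual_power(amp, delay, phi):
    """Power at the coupler output after combining with the cancellation path."""
    cancel = amp * np.cos(2 * np.pi * f_rf * (t - delay) + phi)
    combined = leak + target + cancel
    return np.mean(combined**2)

# With matched amplitude and delay and a 180-degree phase shift, the leakage
# cancels and essentially only the target signal remains.
p_before = np.mean((leak + target)**2)
p_after = residual_power(amp=0.8, delay=35e-12, phi=np.pi)
print(f"leakage suppression: {10 * np.log10(p_before / p_after):.1f} dB")
```

In a real system the feedback control unit would search for the (amp, delay, phi) triple that minimizes the residual power rather than being handed the matched values, but the minimum it converges to is the same condition shown here.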
11860260
DESCRIPTION OF THE INVENTION EMBODIMENTS The following description of the embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention. 1. Overview As shown inFIGS.1A,1B, and1C, the system preferably includes a positioning engine and a corrections processing engine. The system can optionally include one or more GNSS receivers, reference stations, and/or any suitable components. The positioning engine can include one or more: observation module, outlier detector, carrier phase determination module, validation module, position module, velocity module, dead reckoning module, fast reconvergence module, and/or any suitable module(s). The corrections processing engine can include one or more: reference observation monitor, correction data monitor, metadata monitor, modeling engine, and/or any suitable module(s). As shown inFIG.7, the method can include: receiving reference station observations, determining corrections based on the reference station observations, receiving satellite observations, resolving carrier phase ambiguity based on the satellite observations and the corrections, and estimating a position of the GNSS receiver based on the carrier phase measurements. The method can optionally include: validating the corrections, detecting predetermined events, mitigating predetermined events, validating the integer ambiguities, removing the integer ambiguities from carrier phase measurements, operating an external system based on the estimated position, and/or any suitable steps. Embodiments of the system and/or method can be used, for example, in autonomous vehicle guidance (e.g., for unmanned aerial vehicles (UAVs), unmanned aerial systems (UAS), self-driving cars, agricultural equipment, robotics, rail transport/transit systems, etc.), GPS/GNSS research, surveying systems, and/or may be used for any suitable operation. In specific examples, the system (and/or components) can be coupled to any suitable external system such as a vehicle (e.g., UAV, UAS, car, truck, etc.), robot, railcar, user device (e.g., cell phone, mobile applications), agriculture, robotics, and/or any suitable system. Additionally, the GNSS receivers may be designed to utilize open-source software firmware, allowing them to be easily customized to particular demands of end user applications, easing system integration and reducing host system overhead; however, the GNSS receivers can be designed in any suitable manner. 1.1 GNSS Accuracy and Integrity Accuracy in a GNSS positioning system is a characteristic of the system that provides statistical information about possible error in the system's position data output. For example, a standard GNSS receiver might specify an accuracy at the ˜68% confidence level (i.e., 1 standard deviation for a normal distribution) as 1 meter; in other words, 68% of the time, the position output by the system is within 1 meter of true position. As another example, a high-accuracy real-time kinematic (RTK) GNSS receiver might provide an accuracy (at the 95% confidence interval or 2 standard deviations for a normal distribution) of 3 cm. Thus, a high-accuracy system is simply a system that achieves a low position error most of the time (as measured a posteriori). Like accuracy, integrity is also based on position error; however, integrity includes the concept of real- or near-real time error estimation (as opposed to a posteriori error calculation).
Based on this real-time error estimation, positioning systems with integrity can provide alerts when positioning error likely exceeds error thresholds. At a broad level, a positioning system's integrity may be described using the following parameters: position error (PE), integrity risk, protection level (PL), alert limit (AL), and time to alert (TTA). Real- or near-real time error estimation can occur within a predetermined estimation time (e.g., 100 ms, 1 s, 2 s, 3 s, 4 s, 5 s, 10 s, 20 s, 30 s, 45 s, 60 s, 90 s, 120 s, 180 s, 240 s, 300 s, 600 s, etc.), “fast enough to be used during navigation,” and/or with any suitable timing. Position error is the error in the estimated position (e.g., if an estimated position is 1 m away from the true position, the position error is 1 m). As previously noted, it is not possible for a GNSS receiver to know position error in real time independently (e.g., while a receiver may know that 95% of the time error is under 1 m thanks to accuracy characterization, the receiver cannot independently determine that a given position estimate is exactly 0.5 m away from true position—and if it could, it would simply subtract the error, resulting in perfect accuracy). Note that while error is discussed in terms of “position error”, integrity may generally be assessed with regard to any parameter estimated by the positioning system (e.g., error in horizontal position of a receiver, error in vertical position of a receiver, error in pseudorange(s) from a receiver to one or more satellites, error in velocity, etc.). Integrity risk is a characterization of integrity (as accuracy is a characterization of the system more generally). Integrity can be a measure of trust that can be placed in the correctness of the information supplied by a navigation system. Integrity risk is generally specified as the probability that position error will exceed some threshold (the alert limit) over some time period (e.g., at the current time, a second, a minute, an hour, a day, an operation duration, etc.). Target Integrity Risk (TIR) is an integrity risk goal used to generate protection levels (e.g., alert or mitigation thresholds). In some variants, the integrity risk can be separated into (e.g., determined from) constituent components: the probability that one or more predetermined events from a set of predetermined events occurs (e.g., during the time period); an intermediate data integrity risk (e.g., the probability that intermediate data, such as pseudorange; carrier phase; real-valued carrier phase; integer-valued carrier phase; etc., used to estimate the position will exceed a threshold over a time period, the probability that there will be an error in the intermediate data during a time period, the probability that the intermediate data will transform incorrectly, such as fixing to an incorrect value, etc.); and/or any suitable probabilities. The estimated position integrity risk (and/or intermediate data integrity risk) can be the sum of one or more of the separate probabilities, the product of one or more of the separate probabilities, the maximum probability of the separate probabilities, the minimum probability, an average of one or more of the separate probabilities, based on an equation and/or model (e.g., determined empirically, based on fit parameters, based on Monte Carlo simulations, etc.) relating one or more of the separate probabilities to the integrity risk, and/or be otherwise determined from one or more of the separate probabilities.
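As a simple illustration of combining constituent probabilities into an overall integrity risk, the sketch below applies two of the combination rules named above (sum and maximum) to hypothetical per-period probabilities; the numbers are placeholders, not requirements of the system.

```python
# Hypothetical constituent integrity-risk terms for one exposure period.
p_event = 1e-7         # probability a predetermined event occurs in the period
p_intermediate = 5e-8  # intermediate-data risk (e.g., an incorrect ambiguity fix)

# Two of the combination rules named above:
risk_sum = p_event + p_intermediate      # conservative union bound
risk_max = max(p_event, p_intermediate)  # dominant-term approximation

target_integrity_risk = 1e-6
print(f"sum rule: {risk_sum:.1e} (meets TIR: {risk_sum <= target_integrity_risk})")
print(f"max rule: {risk_max:.1e} (meets TIR: {risk_max <= target_integrity_risk})")
```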
The integrity risk (e.g., estimated position integrity risk, intermediate data integrity risk) can be determined using weighted or unweighted separate probabilities. However, the integrity risk can be otherwise defined. Protection levels are a statistical upper bound to position error calculated based on the target integrity risk, and serve as the mechanism for real-time position error estimation. Alternatively stated, the protection level is an estimated position error assured (at any point in time) to meet a given TIR, or P{PE>PL}≤TIR. For a given position estimate, the protection level is calculated such that the probability of actual position error being larger than the protection level is less than the target integrity risk. Note that as a GNSS receiver receives more data and/or spends more time calculating, protection levels often decrease (i.e., when the receiver becomes more certain about position, the position error range that still meets TIR decreases). However, the protection levels can be otherwise defined. Alert limits are thresholds for protection levels. In an illustrative example, a position estimate may be considered unreliable when the protection level for the position estimate is above 10 m (in this case, 10 m is the alert limit). Relatedly, time to alert (TTA) is the maximum amount of time that may elapse between a protection level surpassing an alert limit and the generation of an alert (e.g., specifying that the position estimate is unreliable). However, the alert limit can be otherwise defined. While these parameters are the standard for describing integrity, it is worth noting that many of them may be specified in different ways. For example, position error estimates may be an upper bound of integrity risk at one or more set position errors. In general, the concept of integrity within satellite positioning involves estimating position error and responding accordingly. However, the integrity can be otherwise described. As previously mentioned, high-integrity positioning is important for applications in which GNSS error can result in high costs. One such application in which high-integrity positioning is important is in autonomous vehicle (AV) guidance. Unfortunately, traditional high-integrity GNSS systems (e.g., GNSS receivers that produce protection levels around 10 m) are greatly limited in utility for AV guidance—not only are these systems too costly for use in most AVs, but the alert limits required for AV guidance (e.g., 3 m) are substantially lower than those achievable by traditional system protection levels. Furthermore, traditional system protection levels for aircraft are calculated under open-sky conditions, unlike the busy and crowded urban environments AVs need to tackle, and therefore do not have to contend with pseudorange multipath errors that are compounded by commercial-grade receivers. Additionally, traditional systems' operation environments leverage simplifying assumptions (e.g., single fault assumptions, ignoring sub-meter threats) to speed up validation, which cannot be made in some of the technology's contemplated use cases. The systems and methods of the present disclosure are directed to novel high-integrity positioning that enables effective use of GNSS positioning for commercial applications. 
1.2 Brief Overview of Traditional GNSS, PPP, and RTK

As a quick refresher, traditional satellite positioning systems (e.g., standard GNSS) work by attempting to align a local copy (at a receiver) of a pseudorandom binary sequence with a satellite-transmitted copy of the same sequence; because the satellite is far from the receiver, the signal transmitted by the satellite is delayed. By delaying the local copy of the sequence to match up with the satellite-transmitted copy, the time it takes the signal to travel from the satellite to the receiver can be found, which can in turn be used to calculate the distance between the satellite and receiver. By performing this process for multiple satellites (typically four or more), a position of the receiver relative to the satellites can be found, which can in turn be used to find the position in a particular geographic coordinate system (e.g., latitude, longitude, and elevation). Typical GNSS systems can achieve at best 2 m accuracy in positioning. For many applications (e.g., guidance for human-carrying autonomous vehicles/drones/agricultural equipment, GPS/GNSS research, surveying), this level of accuracy is woefully inadequate. In response, two position correction algorithms have been developed: precise point positioning (PPP) and real time kinematic (RTK). Instead of solely using the positioning code broadcast by satellites, PPP and RTK also make use of satellite signal carrier phase to determine position. While much higher accuracy is possible using carrier phase data, accurately determining position of a GNSS receiver (i.e., the receiver for which position is to be calculated) requires accounting for a number of potential sources of error. Further, carrier phase measurements are ambiguous; because the carrier signal is uniform, it may not be possible to differentiate between a phase shift of φ and 2πN+φ using phase measurements alone, where N is an integer. For example, it may be difficult to determine the difference between a phase shift of π radians and a phase shift of 3π radians (or −π, 5π, etc.). PPP attempts to solve this issue by explicitly modeling the error present in GNSS receiver phase and code measurements. Some errors are global or nearly global (e.g., satellite orbit and clock errors); for these errors, PPP typically uses correction data with highly accurate measurements. However, for local errors (i.e., error that is substantially dependent on GNSS receiver location), PPP is only capable of very rough modeling. Fortunately, many local errors change slowly in time; resultantly, PPP can achieve high accuracy with only a single receiver, but may require a long convergence time to precisely determine local errors. As the terms are used in the present application, "global error" refers to any error that does not vary substantially across multiple reference stations within a region, while "local error" refers to error that does vary substantially across multiple reference stations (because the error is specific to a reference station and/or because the error varies substantially over position within the region). As this error pertains to positioning, such errors may also be referred to as "global positioning error" and "local positioning error". RTK avoids a large majority of the modeling present in PPP by use of GNSS reference stations (with precisely known locations); since a reference station is local to the GNSS receiver, differencing the reference station and GNSS receiver signals can result in greatly reduced error.
The result is that RTK solutions can converge much more quickly than PPP solutions (and without the high accuracy global corrections data needed by PPP). However, RTK solutions require the presence of base stations near a GNSS receiver.

1.3 Benefits

Variations of the technology can confer several benefits and/or advantages. First, variants of the technology can achieve high precision positioning for a GNSS receiver and/or external system. In specific examples, using carrier phase measurements and/or determining an integer ambiguity can increase the precision with which a GNSS receiver position can be determined (such as achieving centimeter or better positioning of a mobile receiver). In specific examples, this level of precision positioning can be achieved with commercial-grade GNSS receivers and antennas, which have adaptive tracking profiles that are a function of the receiver dynamics and suffer from different levels of pseudorange error as compared to aviation-grade GNSS receivers and antennas. Second, variants of the technology can enable high accuracy GNSS receiver and/or external system position estimation. In related variants, the integrity of the estimated position can be (approximately) independent of pseudorange multipath errors. In a specific example, estimating the position using carrier phase ambiguities (e.g., integer-valued carrier phase ambiguities) with or without pseudorange measurements can enable the high accuracy and/or reduce the dependence of the estimated position on the multipath errors. Third, variants of the technology can enable high integrity (e.g., low integrity risk, small protection levels, etc.) of the estimated GNSS receiver and/or external system position. In specific examples, the high integrity estimated position(s) can be enabled by distributing the predetermined event detection between the positioning engine (e.g., detecting threats with quick or immediate integrity impact) and the corrections processing engine (e.g., detecting threats with slow integrity impact, on the order of seconds or minutes), detecting sub-meter threats, having a plurality of validation levels for the integer-valued carrier phase ambiguities, using signals from multiple constellations within the threat model, using a first set of reference station observations to determine corrections and a second set of reference station observations to validate the corrections, and/or be otherwise enabled. Fourth, variants of the technology can estimate the GNSS receiver and/or external system position quickly. In specific examples, the system and/or method can achieve a first TIR within 30 s (e.g., 5 s, 10 s, 15 s, 20 s, etc.), a second TIR within 90 s (e.g., of start-up, after achieving the first TIR, etc., such as within 10 s, 20 s, 30 s, 45 s, 60 s, 75 s, 90 s, etc.), and a third TIR within 300 s (e.g., of start-up, after achieving the first TIR, after achieving the second TIR, etc., such as 10 s, 20 s, 30 s, 45 s, 60 s, 75 s, 90 s, 120 s, 150 s, 180 s, 200 s, 215 s, 250 s, 270 s, 300 s, etc.). However, the technology can estimate and validate the GNSS receiver and/or external system position within any other suitable timeframe. Fifth, variants of the technology can enable receiver positioning to be determined to a threshold integrity in the absence of satellite signals (e.g., satellite signal detection interrupted due to hardware issue, obstructions, etc.).
In a specific example, one or more sensors (e.g., inertial navigation systems (INS)) can be used to estimate the GNSS receiver and/or external system position when one or more satellite signals are not received. In this specific example, the dead reckoning position determined using different INSs can be validated against each other to ensure that the dead reckoning position meets the threshold integrity. However, variants of the technology can confer any other suitable benefits and/or advantages.

2. System

As shown in FIGS. 1A, 1B, and 1C, the system 1000 includes a computing system 1300. The computing system can include a positioning engine 1100 and a corrections processing engine 1500. The system 1000 can optionally include one or more GNSS receivers 1200, reference stations 1600, sensors 1700, and/or any suitable component(s). The system functions to estimate the position of a mobile receiver and/or external system. The estimated position preferably has a high accuracy, but can have any suitable accuracy. For example, the estimated position can have an accuracy (e.g., with 50% confidence, 68% confidence, 95% confidence, 99.7% confidence, etc.) of at most 10 meters (e.g., 1 mm, 5 mm, 1 cm, 3 cm, 5 cm, 10 cm, 20 cm, 30 cm, 50 cm, 60 cm, 75 cm, 1 m, 1.5 m, 2 m, 3 m, 5 m, 7.5 m, etc.). The estimated position preferably has a high integrity (e.g., a low target integrity risk, a small protection level, etc.), but can have any suitable integrity. For example, the estimated position (and/or velocity) can have a target integrity risk less than about 10−2/hour, such as at most 10−3/hour, 10−4/hour, 10−5/hour, 10−6/hour, 10−7/hour, 10−8/hour, and/or 10−9/hour. In a second example, the estimated position can have a protection level less than about 10 meters, such as at most 5 m, 3 m, 2 m, 1 m, 75 cm, 50 cm, 40 cm, 30 cm, 25 cm, 20 cm, 10 cm, 5 cm, 3 cm, 1 cm, 5 mm, and/or 1 mm. The system can additionally or alternatively function to determine the probability and/or probability distribution that one or more predetermined events (e.g., feared events, threats, faults, etc.) will occur within a given time period. The predetermined events can correspond to events that will decrease the accuracy, integrity, or availability of, and/or otherwise impact, the estimated position. The predetermined events can directly impact the estimated position and/or indirectly impact the estimated position (e.g., by impacting the determination of real- or integer-valued carrier phase, the dead reckoning position determination, corrections, outlier detection, reception of the satellite observations, reception of reference station observations, reception of corrections, etc.). In specific examples, the predetermined events can include: high dynamic events, low dynamic events, datalink threats, and/or other events. Examples of high dynamic events include: local predetermined events such as pseudorange multipath, carrier phase multipath, carrier phase cycle slip, RF interference, non-line of sight (NLOS) tracking, false acquisition, Galileo binary offset carrier modulation (BOC) second peak tracking, spoofing, etc.; satellite and/or satellite constellation predetermined events (e.g., satellite feared events) such as code carrier incoherency, satellite clock step error, satellite clock drift error greater than 1 cm/s, GPS evil waveform, loss of satellite observations, erroneous navigation messages, etc.; and/or other high dynamic events.
Examples of low dynamic events include: environmental predetermined events, such as an ionospheric gradient of at most 1 cm/s, tropospheric gradient, ionospheric scintillations, atmospheric events, etc.; network predetermined events such as reference station pseudorange multipath, reference station RF interference, reference station cycle slip, reference station observation loss, reference station observation corruption, etc.; low dynamic satellite and/or satellite constellation predetermined events such as loss of satellite observations, erroneous navigation message(s), satellite clock drift error at most about 1 cm/s, issue of data anomaly, erroneous broadcast ephemeris, erroneous broadcast clock, constellation failure, etc.; metadata predetermined events such as incorrect reference station coordinates, incorrect earth rotation parameters, incorrect sun/moon ephemeris, incorrect ocean loading parameters, incorrect satellite attitude model, incorrect satellite phase center offset, incorrect satellite phase center variation, incorrect leap second, etc.; and/or other low dynamic events. Examples of datalink threats can include: correction message corruption, correction message loss, correction message spoofing, and/or other data transmission threats. In variants, high dynamic events can be events (e.g., threats, faults, etc.) that can impact the integrity of the estimated position as soon as the high dynamic events are processed and/or used to estimate the position (e.g., on the order of seconds, milliseconds, nanoseconds, concurrently with satellite signal processing). In related variants, low dynamic events can be events (e.g., threats, faults, etc.) that can impact the integrity of the estimated position within a threat time period. The threat time period can be predetermined (e.g., 1 s, 5 s, 10 s, 12 s, 20 s, 30 s, 40 s, 50 s, 60 s, 90 s, 120 s, 180 s, 240 s, 300 s, 600 s, etc.), based on the event, based on the integrity (e.g., previous estimated position integrity, target integrity, application required integrity, etc.), based on the data transmission lag, and/or can be any suitable time period. In related variants, datalink threats can be events based on transmitting and/or receiving data between computing systems. The probability of the predetermined events occurring (and/or the impact of the predetermined events) can be determined heuristically, empirically, based on computer simulations and/or models (e.g., Monte Carlo simulations, direct simulations, etc.), and/or otherwise be determined. The probability for each predetermined event is preferably independent of the probability of other predetermined events. However, the probability of two or more predetermined events can be dependent on each other. In variants, the probability of predetermined events can be used to estimate (and/or determine) the TIR (e.g., for the position estimate, for intermediate data, partially transformed data, etc.). In specific examples, the TIR can be determined from the product, over each predetermined event, of the probability of the predetermined event scaled by the probability of misdetecting the predetermined event and the impact of the predetermined event on the estimated position and/or intermediate data. However, the TIR can be determined in any suitable manner. The computing system 1300 preferably functions to process data from reference stations, GNSS receivers, and/or sensors.
The computing system may process this data for multiple purposes, including aggregating data (e.g., tracking multiple mobile receivers, integrating satellite observations and sensor data, etc.), system control (e.g., providing directions to an external system based on position data determined from a GNSS receiver attached to the external system), position calculation (e.g., performing calculations for GNSS receivers that are offloaded due to limited memory or processing power, receiver position determination, etc.), correction calculation (e.g., local and/or global correction data such as to correct for clock errors, atmospheric corrections, etc.), detecting predetermined events (e.g., in the satellite observations, in the reference station observations, in a datalink, etc.), and mitigating the effect of the predetermined events (e.g., by removing the observations that include the predetermined events, by scaling the observations that include the predetermined events, etc.), and/or the computing system can process the data in any suitable manner. The computing system may additionally or alternatively manage reference stations or generate virtual reference stations for GNSS receiver(s) based on reference station observations. The computing system may additionally or alternatively serve as an internet gateway to GNSS receivers if the GNSS receivers are not directly connected to the internet. The computing system can be local (e.g., an internet-connected general-purpose computer or processor local to a GNSS receiver, to an external system, to a sensor, to a reference station, etc.), remote (e.g., central processing server, cloud, server, etc.), distributed (e.g., split between one or more local and remote systems), and/or configured in any suitable manner. In a preferred embodiment, the computing system is distributed between a local computing system and a remote computing system (e.g., server system). In a specific example, the local computing system can include a positioning engine and the remote computing system can include a corrections processing engine. However, the local computing system can include both the positioning engine and the corrections processing engine, the server can include both the positioning engine and the corrections processing engine, the server can include the positioning engine and the local computing system can include the corrections processing engine, and/or the positioning engine and/or corrections processing engine can be distributed between the server and the local computing system. However, the computing system can include any suitable components and/or modules. The positioning engine 1100 functions to estimate the position of the GNSS receiver 1200 and/or an external system coupled to the GNSS receiver. The positioning engine 1100 preferably takes as input satellite observations (e.g., observation data) from the GNSS receiver 1200 (or other GNSS data source) and corrections (e.g., corrections data) from the corrections processing engine 1500 to generate the estimated position (e.g., position data). However, the positioning engine can additionally or alternatively take sensor data, predetermined event information (e.g., detection of predetermined events, mitigation of predetermined events, probabilities of predetermined events, etc.), reference station observations, satellite observations from other GNSS receivers, and/or any data or information input(s).
The positioning engine preferably outputs an estimated position and an integrity of the estimated position (e.g., a protection limit, an integrity risk, etc.). However, the positioning engine can additionally or alternatively output a dead reckoning position, sensor bias, predetermined event (e.g., detection, identity, mitigation, etc.), and/or any suitable data. The positioning engine is preferably communicably coupled to the GNSS receiver, the corrections processing engine, and the sensor(s), but can additionally or alternatively be communicably coupled to reference stations and/or any suitable component(s). The positioning engine preferably performs an error detection on data (e.g., corrections, correction reliability, predetermined event detection, etc.) received from the corrections processing engine. The error detection is preferably based on cyclic redundancy checks (CRC) (such as CRC-16, CRC-32, CRC-64, Adler-32, etc.). However, the error detection can be based on a secure hash algorithm (SHA), cryptographic hash functions, hash-based message authentication codes (HMAC), Fletcher's checksum, longitudinal parity check, sum complement, fuzzy checksums, a fingerprint function, a randomization function, and/or any suitable error detection scheme. In some variants, the corrections received from the corrections processing engine can be invalidated (and/or otherwise unavailable) after a time-out period has elapsed (and no new corrections have been received within the time-out period). The time-out period can be a predetermined duration (e.g., 1, 2, 5, 10, 20, 30, 40, 50, 60, 90, 120, 180, 300, 600, etc. seconds), a random or pseudorandom duration of time, based on the reliability of the corrections, based on a predicted change in the corrections, based on the GNSS receiver (e.g., position, velocity, acceleration, receiver bias, etc.), based on the external system (e.g., level of position integrity required, level of position accuracy required, position, velocity, acceleration, etc.), based on the application, based on the positioning engine (e.g., the ability of the positioning engine to accommodate inaccuracies in the corrections), and/or based on any suitable components. However, any suitable time-out period can be used. In a specific example, the positioning engine (and/or components of the positioning engine) can perform any suitable method and/or steps of the method as described in U.S. patent application Ser. No. 16/817,196 titled "SYSTEMS AND METHODS FOR REAL TIME KINEMATIC SATELLITE POSITIONING," filed 12 Mar. 2020, which is incorporated herein in its entirety by this reference. As shown in FIG. 2, the positioning engine 1100 includes one or more of: an observation module 1110 (e.g., observation monitor), a carrier phase determination module 1115, a fast reconvergence module 1140, an outlier detector 1150 (e.g., a cycle slip detector), a position module 1160 (e.g., a fixed-integer position filter), a velocity module 1170 (e.g., a velocity filter), and a dead reckoning module 1180. However, one or more modules can be integrated with each other and/or the positioning engine can include any suitable modules. Note that the interconnections as shown in FIG. 2 are intended as non-limiting examples, and the components of the positioning engine 1100 may be coupled in any manner.
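For illustration only, a minimal Python sketch of the CRC-based error detection and corrections time-out described above follows; the message layout, function names, and the 60 s time-out are hypothetical assumptions rather than the claimed scheme:

    import zlib

    CORRECTIONS_TIMEOUT_S = 60.0  # hypothetical time-out period

    def corrections_usable(payload: bytes, crc: int, received_at: float, now: float) -> bool:
        # Reject the corrections message if its CRC-32 does not match (e.g.,
        # datalink corruption) or if the time-out period has elapsed without
        # a fresh message.
        if zlib.crc32(payload) != crc:
            return False  # corrupted in transit
        return (now - received_at) <= CORRECTIONS_TIMEOUT_S

    payload = b"example corrections payload"
    crc = zlib.crc32(payload)
    print(corrections_usable(payload, crc, received_at=0.0, now=30.0))  # True
    print(corrections_usable(payload, crc, received_at=0.0, now=90.0))  # False (timed out)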
The observation module 1110 functions to take as input satellite observations (e.g., observations data) from the GNSS receiver(s) 1200 and check them for potential predetermined events and/or outliers (e.g., large errors in pseudorange and/or carrier phase). However, the observation module can additionally or alternatively check the reference station observations, the corrections, the sensor data, and/or any suitable data for predetermined events. The observation module is preferably configured to detect high dynamic predetermined events, but may be configured to detect datalink predetermined events, low dynamic predetermined events, and/or any predetermined events. Potential predetermined events may include any detected issues with observations—observations changing too rapidly, or exceeding thresholds, for example. The observation module can additionally or alternatively mitigate the effect of the predetermined events and/or transmit a mitigation (e.g., to be applied by an outlier detector, a carrier phase determination module, etc.) for the predetermined events, which functions to reduce the impact of the predetermined event occurrence on the estimated position (and/or any intermediate data in the estimation of the position). In a first specific example, when a predetermined event is detected among the satellite observations, the satellite observation with the predetermined event can be removed from the satellite observations. In a second specific example, when a predetermined event is detected among the satellite observations, the satellite observations can be scaled to decrease and/or remove the effect of the predetermined event. In a third specific example, when a predetermined event is detected among the satellite observations, additional satellite observations can be collected and/or transmitted to mitigate the effect of the predetermined events. In a fourth specific example, when a predetermined event is detected, the associated satellite observation(s) can be corrected (e.g., based on corrections determined from interpolation, secondary sensors, other constellations, etc.). However, any suitable mitigation strategy can be used to mitigate the effects of the predetermined events. The observation module is preferably communicably coupled to the velocity module and the carrier phase determination module, but can be communicably coupled to the outlier detector, the fast reconvergence module, the dead reckoning module, corrections processing engine, sensor(s), and/or any suitable module. In a specific example, the observation module 1110 provides pseudorange and carrier phase data to the carrier phase determination module (e.g., float position filter 1120) and carrier phase data to the velocity module 1170. The carrier phase determination module preferably functions to resolve the ambiguity in the carrier phase resulting from an uncertain number of wavelengths having passed before the satellite observations were received by the GNSS receiver. The carrier phase determination module is preferably communicably coupled to the observation module, the corrections processing engine, and the position module. However, the carrier phase determination module can be communicably coupled to the sensor(s), GNSS receiver, outlier detector, the dead reckoning module, the fast reconvergence module, and/or any suitable components.
While the carrier phase determination module is preferably not communicably coupled to the velocity module, the carrier phase determination module can be communicably coupled to the velocity module. In a specific example, resolving the carrier phase ambiguity can include determining a real-valued carrier phase ambiguity, determining an integer-valued carrier phase ambiguity, testing the integer-valued carrier phase ambiguity, and generating a fixed estimator. However, the carrier phase ambiguity can be resolved in any suitable manner. In variants, the carrier phase determination module can include a float filter 1120 (e.g., a float position filter) and an integer fixing module 1130 (e.g., an integer ambiguity resolver). However, the carrier phase determination module can include any suitable components. The float filter 1120 functions to generate a float solution to the carrier phase ambiguity for each satellite (e.g., real-valued carrier phase ambiguities) to be used in position estimation. The inputs to the float filter can include corrections (e.g., corrections with a reliability greater than a threshold), satellite observations (e.g., pseudorange, carrier phase, predetermined event mitigated satellite observations, raw satellite observations, etc.), linear combinations of satellite observations, sensor data, and/or any suitable data. The output from the float filter is preferably a real-valued carrier phase ambiguity, but the float filter can output the pseudorange or any suitable information. The float filter can determine the real-valued carrier phase ambiguities using a least squares parameter estimation, a recursive least squares parameter estimation, Kalman filter(s), extended Kalman filter(s), unscented Kalman filter(s), particle filter(s), and/or any suitable method for generating the real-valued carrier phase ambiguities. In a specific example, for a single satellite/receiver pair, the carrier phase measurement at the receiver can be modeled as follows:

φ(t) = (1/λ)(r − I + T) + f(δt_r − δt_s) + N + ε_φ

where λ is the wavelength of the satellite signal, r is the range from the receiver to the satellite, I is the ionospheric advance, T is the tropospheric delay, f is the frequency of the satellite signal, δt_r is the receiver clock bias, δt_s is the satellite clock bias, N is the integer carrier phase ambiguity, and ε_φ is a noise term. The float filter 1120 preferably uses carrier phase data and pseudorange data from the GNSS receiver 1200, along with corrections data from the corrections processing engine 1500, to generate a float ambiguity value (i.e., a solution to the integer carrier phase ambiguity that is not constrained to an integer value). Corrections data is preferably used to reduce the presence of ionospheric, tropospheric, satellite clock bias terms, and/or other signals in carrier phase measurements (e.g., via differencing or any other technique). Additionally or alternatively, the float filter 1120 may generate float ambiguity values in any manner. The float filter 1120 may also generate position and velocity estimates from pseudorange and carrier phase data. In some cases, the float filter 1120 may refine real-valued carrier phase ambiguities (e.g., float ambiguity values) using inertial data (e.g., as supplied by a sensor). The integer fixing module 1130 (e.g., integer ambiguity resolver) functions to generate integer-valued carrier phase ambiguities from the real-valued carrier phase ambiguities.
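Before detailing the integer fixing module, the measurement model above can be made concrete with a small numeric Python sketch; all values below are hypothetical, and the rounding step merely illustrates one of the fixing approaches named in the next paragraph:

    C = 299_792_458.0         # speed of light, m/s
    f = 1_575.42e6            # L1 carrier frequency, Hz
    lam = C / f               # wavelength (lambda), roughly 0.19 m

    r = 21_000_000.0          # receiver-to-satellite range, m (hypothetical)
    I, T = 2.0, 1.5           # ionospheric advance / tropospheric delay, m
    dt_r, dt_s = 1e-4, -2e-5  # receiver / satellite clock biases, s
    N = 3                     # integer carrier phase ambiguity, cycles

    # phi(t) = (1/lambda)(r - I + T) + f(dt_r - dt_s) + N + noise (noise omitted)
    phi = (r - I + T) / lam + f * (dt_r - dt_s) + N

    # The float filter estimates a real-valued ambiguity; the integer fixing
    # module then resolves it, e.g., by simple integer rounding:
    float_ambiguity = N + 0.12          # hypothetical float solution
    print(phi, round(float_ambiguity))  # fixed ambiguity == 3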
The integer fixing module 1130 may generate integer-valued carrier phase ambiguities in any manner (e.g., integer rounding, integer bootstrapping, integer least-squares, etc.). The input to the integer fixing module is preferably the real-valued carrier phase (e.g., from the float filter), but can include the pseudorange, the carrier phase(s), the corrections, the sensor data, previous estimated positions and/or integer-valued carrier phase ambiguities, and/or any suitable information. The output from the integer fixing module is preferably an integer-valued carrier phase ambiguity value, but can include any suitable information. In some variants, generating the integer-valued carrier phase ambiguities can include decorrelating the (real-valued) carrier phase ambiguities. The carrier phase ambiguities can be decorrelated using the LAMBDA method, the MLAMBDA method, the LLL reduction algorithm, a whitening transformation, a coloring transformation, a decorrelation transformation, and/or any suitable decorrelation or reduction algorithm. The system can optionally include a validation module 1135 that functions to validate the integer-valued carrier phase ambiguities generated by the integer fixing module 1130, e.g., to determine whether the integer-valued carrier phase ambiguities should be accepted and/or have achieved a threshold quality. Alternatively, the integer-valued carrier phase ambiguities can be validated by the integer fixing module 1130 or any other suitable module. Integer-valued carrier phase ambiguity validation may be performed in any manner (e.g., the ratio test, the f-ratio test, the distance test, the projector test, etc.). In one variation, the validation module validates integer-valued carrier phase ambiguities in a multi-step process. Each step of the multi-step process preferably corresponds to increased confidence in the integer-valued carrier phase ambiguities (e.g., the integrity of the estimated position calculated using the validated integer-valued carrier phase ambiguities), but can additionally or alternatively be associated with different integrity levels, validation performance levels, and/or another integrity metric. Each step of the multi-step validation process can correspond to an amount of time required to validate (and/or determine) the integer-valued carrier phase ambiguity, an integrity (e.g., TIR, protection level, etc.) of the estimated position, a probability of the integer-valued carrier phase ambiguity being correct, and/or any suitable quality. Each step (and/or the number of steps) can depend on the satellite observations (e.g., number of satellite observations, number of satellite constellations, number of satellites corresponding to each satellite constellation, quality of the satellite observations, predetermined events in the satellite observations, etc.), the real-valued carrier phase ambiguity, the pseudorange, the external system, the application of the estimated position, the integrity (and/or target integrity) of the estimated position, the amount of time required to achieve a validation, the sensor (e.g., sensor type, sensor number, sensor data, etc.), and/or any suitable parameter. The number of steps (and/or the steps to use) can be selected based on: the operation context (e.g., the integrity level for a given operation context), the available input data, the amount of available validation time, and/or can be otherwise determined. Alternatively, all steps can always be performed.
Preceding steps are preferably always performed before succeeding steps, but one or more steps can be skipped or performed in a different order. The multi-step process can include at least three steps (e.g., 3 steps, 4 steps, 5 steps, 10 steps, etc.), but can additionally or alternatively include two steps and/or any suitable number of steps. In a first illustrative example, integer-valued carrier phase ambiguities that have not been validated to the first validation step can correspond to a low integrity (e.g., a non-safety of life integrity) estimated position, such as a TIR≥10−4/hour. In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the first validation step, but not to the second validation step, can correspond to an integrity risk of the estimated position ≤10−4/hour and a protection level of ≤2 m. In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the second validation step, but not to the third validation step, can correspond to an integrity risk of the estimated position ≤10−6/hour and a protection level of ≤2 m. In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the third validation step can correspond to an integrity risk of the estimated position ≤10−7/hour and a protection level of ≤3 m. However, the integrity risk and/or protection level of the estimated position can be any suitable value (such as ≤10−2/hour, ≤10−3/hour, ≤10−4/hour, ≤10−5/hour, ≤10−6/hour, ≤10−7/hour, ≤10−8/hour, ≤10−9/hour, etc. and/or 0.1 m, 0.2 m, 0.5 m, 1 m, 2 m, 3 m, 5 m, 10 m, 20 m, 40 m, etc., respectively) for integer-valued carrier phase ambiguities at each validation step. The integrity risk and/or protection level associated with each step can be determined using simulations (e.g., Monte Carlo simulations), based on historical data (e.g., pattern matching), heuristics, neural networks, and/or otherwise determined. In a second illustrative example, integer-valued carrier phase ambiguities that have not been validated to the first validation step can be output immediately (e.g., within <1 s, <2 s, <5 s, <10 s, etc. of start-up). In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the first validation step, but not to the second validation step, can be generated within 30 s of start-up of the positioning engine. In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the second validation step, but not to the third validation step, can be generated within 90 s of start-up of the positioning engine. In this illustrative example, integer-valued carrier phase ambiguities that have been validated to the third validation step can be generated within 300 s of start-up of the positioning engine. However, the integer-valued carrier phase ambiguities at each validation step can correspond to any suitable amount of time (e.g., relative to the start of the positioning engine, relative to continuous operation of the positioning engine, relative to previous validation steps, etc.). In a third illustrative example, the validation module may validate, in the first step, integer ambiguities for at least two satellite constellations (e.g., GPS and Galileo, GPS and GLONASS, GPS and BDS, Galileo and GLONASS, Galileo and BDS, GLONASS and BDS, etc.) simultaneously.
In the first step, the validation module preferably applies the same corrections to each of the satellite observations, but can apply regional-offset modified corrections, different corrections for each satellite (e.g., based on the satellite constellations), and/or any suitable corrections. In a second step, the validation module may validate (in parallel or sequentially) the satellite observations corresponding to a first satellite constellation independently of those corresponding to a second satellite constellation. If the calculated ambiguity corresponds to (e.g., matches) that of the first step, confidence is increased. Note that these validations for different satellite constellations may be performed using the same or different corrections (e.g., corrections to satellite observations for a particular satellite constellation may be modified by a residual regional offset). Likewise, a third step of repeating one or more (such as two) additional consecutive validations for each constellation (e.g., satellite observations corresponding to consecutive time periods, consecutive epochs, the same time period, the same epoch, etc.) may further increase confidence in the calculated integer ambiguity values. However, in the first, second, and/or third step, integer-valued carrier phase ambiguities corresponding to three or more satellite constellations, subsets of satellites within one or more satellite constellations (e.g., validating integer-valued carrier phase values for satellite observations for each satellite from a single satellite constellation, validating integer-valued carrier phase values for satellite observations for a first subset of satellites and a second subset of satellites corresponding to a single satellite constellation, validating integer-valued carrier phase values for satellite observations for each satellite from a plurality of satellite constellations, etc.), and/or any suitable satellite observations can be validated and/or validated any suitable number of times. In each step, when the validations fail (e.g., the ambiguities do not match the respective reference ambiguity), the underlying observations can be removed from the set of observations used to determine the position, the position determination can be restarted, certain external system functions (e.g., associated with the invalid step or associated integrity level) can be selectively deactivated, and/or other mitigation actions can be taken. In cases where multiple satellite constellations are used, the ability to generate independent corrections data may enable the integer fixing module to resolve two independent solutions for integer ambiguity, which in turn can increase the ability to provide high-integrity positioning. In specific examples, the validation module can validate satellite observations and/or subsets thereof (e.g., one or more steps of a multi-step validation) by performing hypothesis testing (and related steps) as described in U.S. patent application Ser. No. 16/817,196 titled "SYSTEMS AND METHODS FOR REAL TIME KINEMATIC SATELLITE POSITIONING," filed 12 Mar. 2020, which is incorporated herein in its entirety by this reference. However, a single-step validation process can be used.
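A minimal Python sketch of the cross-constellation agreement logic described above follows; the data layout and the exact agreement criterion are hypothetical simplifications of the multi-step process:

    def agrees(independent_fix, joint_fix):
        # An independent fix corroborates the joint fix if every shared
        # satellite resolves to the same integer ambiguity.
        return all(joint_fix.get(sat) == n for sat, n in independent_fix.items())

    joint = {"G01": 3, "G07": -2, "E12": -5, "E19": 8}  # step 1: joint fix (hypothetical)
    gps_only = {"G01": 3, "G07": -2}                    # step 2: GPS-only fix
    gal_only = {"E12": -5, "E19": 8}                    #         Galileo-only fix

    step = 1
    if agrees(gps_only, joint) and agrees(gal_only, joint):
        step = 2  # per-constellation fixes corroborate the joint fix
    # Step 3 (not shown): repeat the check over consecutive epochs; on any
    # disagreement, observations would be removed and/or the fix restarted.
    print(step)   # 2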
The integer-valued carrier phase ambiguities are preferably transmitted to the position module 1160 for position estimation after the integer-valued carrier phase ambiguities have been validated in at least one validation step, but the integer-valued carrier phase ambiguities can be transmitted to the position module 1160 for position estimation before or during validation of the integer-valued carrier phase ambiguities, at any other suitable time, and/or to any other suitable endpoint. The fast reconvergence module 1140 functions to provide robustness in the event of short GNSS service interruptions based on inertial data (e.g., as provided by an inertial measurement unit (IMU)) and/or other non-GNSS sourced data (e.g., wheel odometer data, visual odometry data, image data, RADAR/LIDAR ranging data). For example, the fast reconvergence module 1140 may provide the carrier phase determination module 1115 with an estimated carrier phase ambiguity (e.g., real-valued carrier phase ambiguity, integer-valued carrier phase ambiguity, etc.) based on previously estimated values and inertial/other data captured during a GNSS service interruption. In one implementation, the fast reconvergence module 1140 may, after resumption of valid GNSS messages, difference GNSS data and compare the difference to an estimate of change in position calculated using inertial and/or other data, and from this comparison, calculate a fast estimate of integer ambiguity change (which can be added to an older integer ambiguity estimate to produce an estimate of the current integer ambiguity). This estimate can then speed the process of re-establishing validated positioning data. However, the system can otherwise quickly reconverge on the carrier phase integer ambiguity after GNSS service resumption. The outlier detector 1150 functions to detect outliers, predetermined events (such as multipath error, cycle slips, etc.), and/or erroneous measurements within the data (e.g., satellite observations, sensor data, reference station observations, corrections, etc.). The outlier detector is preferably communicably coupled to the velocity module, the position module, the sensor(s), and the observation module, but can be communicably coupled to the corrections processing engine, the carrier phase determination filter, the fast reconvergence module, the dead reckoning module, and/or any suitable module. The inputs to the outlier detector can include: sensor data, satellite observations, corrections, previous estimated positions and/or velocities, reference station observations, and/or any suitable data or information. The outputs from the outlier detector can include: identification of predetermined events (e.g., identifying the presence of predetermined events, identifying the type of predetermined event, etc.), mitigation(s) for predetermined events, mitigated satellite observations (e.g., satellite observations that have been corrected to account for and/or remove the predetermined events, outliers, and/or erroneous measurements), and/or any suitable data or information. Mitigating the effect of the outlier(s) and/or predetermined event(s) can include removing one or more satellite observations from the set of satellite observations, weighting satellite observations with predetermined events differently from satellite observations without predetermined events, applying a correction to remove the predetermined event from the satellite observation, and/or any suitable steps.
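A minimal Python sketch of the removal and down-weighting mitigations described above follows; the normalized residual test and the weight values are hypothetical:

    def mitigate_outliers(observations, residuals, threshold=3.0):
        # Remove observations whose residual clearly indicates an outlier or
        # predetermined event, and down-weight borderline observations.
        kept, weights = [], []
        for obs, res in zip(observations, residuals):
            if abs(res) > 2 * threshold:
                continue  # remove: clear outlier / predetermined event
            kept.append(obs)
            weights.append(1.0 if abs(res) <= threshold else 0.25)  # down-weight
        return kept, weights

    obs = ["G01", "G07", "E12", "E19"]
    res = [0.4, 3.5, -0.2, 9.1]         # hypothetical normalized residuals
    print(mitigate_outliers(obs, res))  # (['G01', 'G07', 'E12'], [1.0, 0.25, 1.0])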
In variants, the outlier detector can perform the method and/or any steps of the method as disclosed in U.S. patent application Ser. No. 16/748,517 titled "SYSTEMS AND METHODS FOR REDUCED-OUTLIER SATELLITE POSITIONING" filed 21 Jan. 2020, which is herein incorporated in its entirety by this reference. However, the outlier detector can function in any suitable manner. In a specific example, the outlier detector can include a cycle slip detector 1155, which functions to detect potential cycle slips in carrier phase observations (i.e., a discontinuity in a receiver's continuous phase lock on a satellite's signal). The cycle slip detector 1155 preferably detects cycle slips by examining linear combinations of carrier phase observations (e.g., by calculating phase measurement residuals and comparing those residuals to integer multiples of full phase cycles), but may additionally or alternatively utilize sensor data (e.g., inertial data) to detect cycle slips (e.g., when the carrier phase ambiguity changes rapidly but the inertial data does not show rapid movement, that may be indicative of a cycle slip). When a cycle slip is detected in a given observation, the cycle slip detector 1155 preferably discards that observation (making it unavailable for position calculation). Alternatively, the cycle slip detector 1155 may respond to cycle slip detections in any manner (e.g., weighting the observation less in position calculations, attempting to correct the cycle slip in the observation, etc.). The position module 1160 functions to calculate a position estimate for the GNSS receiver 1200. The position estimate is preferably determined based on the carrier phase observations with the ambiguities (e.g., integer-valued or real-valued) removed, but can additionally or alternatively be determined based on sensor data (e.g., IMU data, INS data, etc.), real-valued carrier phase ambiguities, pseudorange, integer-valued carrier phase ambiguities (e.g., calculated by the integer fixing module 1130), and/or any suitable data. In variants, this position estimate can have an accuracy dependent on the correction accuracy (e.g., of the corrections processing engine), such as less than 50 cm (3 sigma), but can alternatively have any other suitable accuracy. This is possible because carrier phase measurements are far more accurate than pseudorange measurements (e.g., 100 times more accurate) and have noise below the centimeter level. The position module can be communicably coupled to the sensor(s), dead reckoning module, outlier detector, carrier phase determination module, observation module, fast reconvergence module, corrections processing engine, and/or any suitable component. The position module is preferably not communicably coupled to the velocity module, but can be communicably coupled to the velocity module. The position module can additionally or alternatively calculate integrity (e.g., integrity risk, protection levels or a mathematically similar error estimate, etc.) for the estimated position. The position module 1160 preferably utilizes a modified form of the Advanced Receiver Autonomous Integrity Monitoring (ARAIM) algorithm to perform protection level generation (and fault detection). ARAIM techniques are based on a weighted least-squares estimate performed in a single epoch. Traditionally, ARAIM techniques are performed using pseudorange calculations; however, the position module 1160 preferably utilizes a modified form of the ARAIM algorithm that takes only carrier phase measurements as input.
The position module may, using this algorithm, mitigate predetermined events such as code carrier incoherency, satellite clock step error, and satellite clock drift. However, the position module can utilize the Receiver Autonomous Integrity Monitoring (RAIM) algorithm, Aircraft Autonomous Integrity Monitoring (AAIM), Multiple Solution Separation (MSS) algorithms, Relative Receiver Autonomous Integrity Monitoring (RRAIM), Extended Receiver Autonomous Integrity Monitoring (ERAIM), and/or any suitable algorithms to calculate the integrity of the estimated position and/or to mitigate the effect of predetermined events. In a specific example, the position module 1160 calculates the estimated position (and associated protection levels) based only on carrier phase observation data (not on pseudorange observation data), which limits the effect of certain predetermined events such as pseudorange multipath on the estimated position and/or the integrity of the estimated position. In variants, the position module can additionally or alternatively perform any suitable transformation (e.g., rotation, scaling, translation, reflection, projection, etc.) on the estimated position of the GNSS receiver to determine an estimated position of the external system. The optional velocity module 1170 functions to estimate a velocity of the GNSS receiver 1200 (and/or external system) and additionally to calculate an integrity (e.g., TIR, protection levels or a mathematically similar error estimate, etc.) for the estimated velocity. Alternatively, the vehicle velocity can be determined from the timeseries of positions output by the position module, determined from inertial sensor measurements, or otherwise determined. The velocity module 1170 preferably estimates the velocity using time-differenced carrier phase measurements, but can additionally or alternatively estimate the velocity using Doppler shift data, sensor data, pseudorange, differential estimated position (e.g., as estimated by the position module at two or more time points and/or epochs), and/or in any manner. The velocity module preferably receives the carrier phase data from the observation module, but can receive the carrier phase data (e.g., real-valued carrier phase ambiguities, integer-valued carrier phase ambiguities, etc.) from the carrier phase determination module, the fast reconvergence module, and/or from any suitable module. The velocity module is preferably communicably coupled to the observation module and the outlier detection module, but can additionally or alternatively be communicably coupled to the carrier phase determination module, the dead reckoning module, the position module, the corrections processing engine, and/or to any suitable module. The estimated velocity can be the relative velocity (e.g., between epochs), the instantaneous velocity, the average velocity, the instantaneous speed, the relative speed, the average speed, and/or any suitable velocity. In some variants, the estimated velocity can be used to estimate the track angle (e.g., the direction of motion of the GNSS receiver and/or external system). In a specific example, the track angle can be estimated based on the horizontal component of the estimated velocity using trigonometric relations. However, the track angle can be determined in any suitable manner. Like the position module 1160, the velocity module 1170 preferably utilizes a modified form of the Advanced Receiver Autonomous Integrity Monitoring (ARAIM) algorithm to perform estimated velocity integrity (e.g., TIR, protection level, etc.)
generation (and fault detection). Traditionally, ARAIM techniques are performed using pseudorange calculations; however, the velocity module 1170 preferably utilizes a modified form of the ARAIM algorithm that takes only carrier phase measurements as input (e.g., similar to or different from the position module 1160). The velocity module 1170 may, using this algorithm, mitigate predetermined events such as code carrier incoherency, satellite clock step error, satellite clock drift, jumps, accelerations, and/or other satellite feared events. However, the velocity module can utilize the Receiver Autonomous Integrity Monitoring (RAIM) algorithm, Aircraft Autonomous Integrity Monitoring (AAIM), Multiple Solution Separation (MSS) algorithms, Relative Receiver Autonomous Integrity Monitoring (RRAIM), Extended Receiver Autonomous Integrity Monitoring (ERAIM), and/or any suitable algorithms to calculate the integrity of the estimated velocity and/or to mitigate the effect of predetermined events. In some variants, the velocity module 1170 may include an ionospheric bias monitor; when an ionospheric bias exceeding a threshold is detected for a given satellite, that satellite may be excluded from velocity solution generation. Alternatively, the velocity module 1170 can share the ionospheric bias monitor with the position module 1160, with the corrections processing engine, and/or with any other suitable system. The ionospheric bias monitor of the velocity module 1170 may monitor ionospheric conditions in any manner; for example, the monitor may measure a change in ionospheric delay using a linear combination of dual frequency carrier phase measurements. For example, when the measured ionospheric delay in the signal from a satellite changes more rapidly than some threshold (or crosses some threshold), some measurements from that signal may be excluded. The threshold can be a predetermined threshold (e.g., greater than about 0.5%, 1%, 5%, 10%, 20%, 25%, 30%, 40%, 50%, 60%, 75%, 80%, 90%, 100%, etc. change over a time period such as 1, 2, 5, 10, 20, 30, 50, 100, 1000, etc. epochs or 1 s, 5 s, 10 s, 20 s, 50 s, 100 s, 200 s, 300 s, 600 s, etc.), depend on the estimated velocity, depend on the satellite observations (e.g., satellite constellations, number of satellites, satellite frequencies and/or frequency combinations, etc.), depend on the application, depend on the external system, depend on the estimated position, depend on the integrity of the estimated velocity, and/or can be any suitable threshold. The dead reckoning module 1180 functions to provide dead reckoning navigation solutions when position and/or velocity data is unavailable (e.g., because a protection level exceeds an alert limit, one or more satellite observations are not available, a datalink threat has occurred, corrections data has timed out, etc.), when the position data falls below a predetermined integrity or confidence threshold, and/or can be otherwise used. The dead reckoning module 1180 may provide dead reckoning position and/or velocity as a replacement for or supplement to GNSS-estimated positions and/or velocities (e.g., estimated by the position module and/or velocity module) using sensor data (e.g., IMU data), the last known position (e.g., having a predetermined integrity level), and/or any suitable data (e.g., extrapolation based on prior estimated positions and velocities).
In a specific example, the dead reckoning position (and/or velocity) can be used as the estimated position (and/or velocity) when the position module (and/or velocity module) is unable to converge and/or determine an estimated position (and/or velocity). The position module (and/or velocity module) may be unable to converge and/or determine an estimated position (and/or velocity), for instance, when the integer-valued carrier phase ambiguity is not validated (e.g., to a predetermined integrity level), when one or more satellite observations are not available (e.g., GNSS receiver outage, obstruction, etc.), when one or more reference station observations are unavailable, when predetermined events are detected, when the datalink times out, based on a user input, and/or at any suitable time. The dead reckoning module can be communicably coupled to the position module, the velocity module, the outlier detector, the fast reconvergence module, the observation module, the carrier phase determination module, the sensors, the corrections processing engine, and/or any suitable component. The dead reckoning position and/or velocity is preferably validated, but can be unvalidated. The dead reckoning position and/or velocity is preferably validated by comparing the dead reckoning position and/or velocity determined from two or more independent sensors. In a first specific example, as shown in FIG. 6A, the validated dead reckoning position and/or velocity is preferably the position and/or velocity that bounds the overlapping positions and/or velocities between the dead reckoning position and/or velocity determined based on data from a first sensor and the dead reckoning position and/or velocity determined based on data from a second sensor. In a second specific example, as shown in FIG. 6B, the validated dead reckoning position and/or velocity is preferably the position and/or velocity that bounds the dead reckoning position and/or velocity determined based on data from a first sensor and the dead reckoning position and/or velocity determined based on data from a second sensor. However, the validated dead reckoning position and/or velocity can be the overlapping region, a position and/or velocity that bounds all of the dead reckoning positions and/or velocities (e.g., surrounds, encompasses, matches the limits of, etc.), and/or any suitable position and/or velocity can be used. However, the dead reckoning position and/or velocity can be validated based on modeling, satellite observations (e.g., when a set or a subset of satellite observations is available, the estimated position and/or velocity from those satellite observations can be used to validate the dead reckoning position and/or velocity), based on auxiliary sensors (e.g., LIDAR, RADAR, ultrasonic sensors, camera(s), etc.), based on communication with other systems (e.g., when other external systems are present), based on reference station observations, and/or in any suitable manner. The dead reckoning module can additionally or alternatively function to estimate and/or determine the bias of one or more sensors. The bias is preferably determined based on comparing the dead reckoning position and/or velocity to the estimated position and/or velocity determined based on the satellite observations (e.g., determined by the position module and/or the velocity module). However, the bias can be determined by calibration, modeling of the sensor(s), auxiliary sensors, and/or be otherwise determined.
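A minimal Python sketch of the cross-sensor validation described above follows, treating each dead reckoning solution as a one-dimensional interval (position ± bound) in the spirit of FIGS. 6A and 6B; the intervals and the bounding rule are hypothetical simplifications:

    def validate_dead_reckoning(a, b):
        # a, b: (position, bound) from two independent sensors. Returns a
        # bounding (validated) interval when the solutions overlap, else
        # None (i.e., a don't-use condition).
        lo_a, hi_a = a[0] - a[1], a[0] + a[1]
        lo_b, hi_b = b[0] - b[1], b[0] + b[1]
        if min(hi_a, hi_b) < max(lo_a, lo_b):
            return None  # no overlap: solutions disagree
        lo, hi = min(lo_a, lo_b), max(hi_a, hi_b)  # envelope bounding both
        return (lo + hi) / 2, (hi - lo) / 2        # validated position, bound

    print(validate_dead_reckoning((100.0, 2.0), (101.0, 1.5)))  # (100.25, 2.25)
    print(validate_dead_reckoning((100.0, 0.5), (105.0, 0.5)))  # None (disjoint)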
In variants, as shown in FIG. 13, a dead reckoning module can include one or more of: a fusion module 1183, a dead reckoning monitor 1185, and/or a dead reckoning integrity monitor 1187. However, a dead reckoning module can include any suitable module(s). The fusion module functions to estimate the receiver position (e.g., when GNSS signals are not available, when GNSS signals are available, etc.) and/or the integrity of the estimated position. Inputs to the fusion module can include: sensor data (e.g., accelerometer data, gyroscope data, validated sensor data, time-stamps, etc.), estimated GNSS position (e.g., the last available GNSS position, most recent GNSS position, etc.), estimated GNSS position integrity (e.g., protection level, TIR, etc.), estimated GNSS velocity (e.g., the last available GNSS velocity, most recent GNSS velocity, etc.), estimated GNSS velocity integrity (e.g., protection level, TIR, etc.), GNSS covariance matrices (e.g., GNSS position covariance, GNSS velocity covariance, etc.), and/or other inputs. Outputs from the fusion module can include: estimated fused position (e.g., absolute estimated position; relative estimated position, such as relative to the last available GNSS position, last available position estimate, etc.), estimated fused velocity (e.g., absolute estimated velocity; relative estimated velocity, such as relative to the last available GNSS velocity, last available velocity estimate, etc.), fused covariance (e.g., estimated position covariance, estimated velocity covariance, etc.), updated integrity (e.g., protection level, TIR, etc. estimated for the estimated fused position and/or estimated fused velocity), and/or other outputs. The fusion module can determine the outputs using one or more of: an alignment algorithm (e.g., strapdown inertial navigation system (SINS)), a zero velocity update (ZUPT) algorithm, a constant velocity update (CUPT) algorithm, a step-wise algorithm, a zero-angular rate update (ZARU) algorithm, heuristic heading reduction, the Earth Magnetic Yaw method, Kalman filters, extended Kalman filters, particle filters, and/or any algorithm(s). The dead reckoning module can include one fusion module, a plurality of fusion modules (e.g., one fusion module per sensor, more than one fusion module per sensor, less than one fusion module per sensor, etc.), and/or any suitable number of fusion modules. In variants including more than one fusion module, the fusion modules are preferably independent (e.g., operate on different data inputs, generate independent data outputs, use different algorithms, etc.), but can be dependent (e.g., operate on the same inputs, include a subset of inputs that are the same and a subset of inputs that are different, use the same algorithms, etc.). In a specific example, the dead reckoning module can include two fusion modules. The first fusion module can receive sensor data from a first subset of sensor data (e.g., associated with a first sensor) and an estimated position determined based on a first subset of satellite signals (e.g., associated with a first satellite constellation, associated with a specific subset of satellites, etc.). The second fusion module can receive a second subset of sensor data (e.g., from a second sensor that is independent of the first sensor, a distinct subset of sensor readings, etc.) and an estimated position determined based on a second subset of satellite signals (e.g., associated with a second satellite constellation, associated with a distinct subset of satellites, etc.).
The dead reckoning monitor functions to detect predetermined events (e.g., faults) in the estimated fused data (e.g., estimated fused position, estimated fused velocity, estimated fused covariance, estimated fused integrity, etc., such as from the fusion module(s)). The dead reckoning monitor preferably receives estimated fused data from at least two fusion modules (e.g., two sets of independent estimated fused data), but can additionally or alternatively receive estimated fused data from a single fusion module (e.g., at one or more time points), estimated GNSS position (e.g., last available GNSS position, historic GNSS position, unvalidated GNSS position, etc.), estimated GNSS velocity (e.g., last available GNSS velocity, historic GNSS velocity, unvalidated GNSS velocity, etc.), GNSS integrity (e.g., last available GNSS position and/or velocity integrity, historic GNSS integrity, etc.), and/or any suitable inputs. The dead reckoning monitor can transmit (e.g., output) one or more flags (e.g., use or don't use, safe or not safe, etc.) relating to the state of the dead reckoning position, but can additionally or alternatively transmit an achievable integrity (e.g., of the estimated dead reckoning position, of the estimated dead reckoning velocity, etc.), an error (e.g., standard deviation, variance, etc.), a confidence interval, and/or any suitable output. The outputs can be generated using an interacting multiple model (IMM) filter, particle filters, extended Kalman filters, comparing one or more inputs to a threshold, and/or using any suitable technique. However, the dead reckoning monitor can additionally or alternatively identify predetermined events, mitigate an effect of the predetermined events, and/or perform any suitable steps. In a specific example, the dead reckoning monitor can compare a first set of estimated fused data to a second set of estimated fused data. When the overlap (e.g., position overlap, velocity overlap, etc.) between the two sets of fused data is greater than or equal to a threshold, the dead reckoning monitor can output a use flag. When the overlap between the two sets is less than (or equal to) the threshold, the dead reckoning monitor can output a don't use flag. However, the dead reckoning monitor can generate outputs in any manner.
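A hedged sketch of the overlap-based use/don't-use flag from the specific example above, assuming the fused outputs are reduced to one-dimensional (low, high) intervals; the overlap metric and threshold handling are illustrative assumptions:

```python
# Illustrative dead-reckoning monitor flag: compares the overlap of two
# fusion modules' output intervals against a minimum-overlap threshold.
def dr_monitor_flag(fused_a: tuple[float, float],
                    fused_b: tuple[float, float],
                    min_overlap: float) -> str:
    """Each input is a (low, high) interval from one fusion module."""
    overlap = min(fused_a[1], fused_b[1]) - max(fused_a[0], fused_b[0])
    return "use" if overlap >= min_overlap else "don't use"
```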
The dead reckoning integrity monitor functions to determine (e.g., estimate, calculate, etc.) the integrity of the dead reckoning position (and/or velocity). The dead reckoning integrity monitor can receive: estimated fused data (e.g., from a single fusion module, from a plurality of fusion modules, etc.), estimated GNSS position (e.g., last available GNSS position, historic GNSS position, unvalidated GNSS position, etc.), estimated GNSS velocity (e.g., last available GNSS velocity, historic GNSS velocity, unvalidated GNSS velocity, etc.), GNSS integrity (e.g., last available GNSS position and/or velocity integrity, historic GNSS integrity, etc.), one or more outputs from a dead reckoning monitor (e.g., flags, uncertainty, achievable integrity, etc.), and/or any suitable input. The dead reckoning integrity monitor preferably determines (e.g., based on the input(s)) an integrity of the dead reckoning position and/or velocity (e.g., horizontal protection level, vertical protection level, TIR, alert limit, etc.), but can determine any suitable output. The integrity of the dead reckoning position can be determined: based on the overlap between the dead reckoning positions and/or velocities (e.g., estimated fused position, estimated fused velocity, etc.); based on the estimated fused data (e.g., the estimated fused data with higher integrity, the estimated fused data with lower integrity, etc.); for example, as described in the sections "positioning platform definition for train control and ETC" and/or "monitoring the integrity of the hybridized solutions" of Philippe Brocard, "Integrity Monitoring for Mobile Users in Urban Environment," Signal and Image Processing, INP de Toulouse, 2016, which is incorporated herein in its entirety by this reference; as shown for example in FIGS. 6A and 6B; and/or in any suitable manner.

The GNSS receiver 1200 functions to receive a set of satellite observations corresponding to signals transmitted from one or more positioning satellites (preferably at least four, but alternatively any number). The satellites preferably correspond to at least two satellite constellations (e.g., GPS, BDS, GLONASS, Galileo), but can correspond to one satellite constellation. These satellite observations (e.g., pseudorange, carrier phase, Doppler measurements, C/N0 measurements, etc. for each satellite) are preferably processed by the positioning engine 1100 to obtain an estimated position of the receiver 1200 (as described in previous sections). The GNSS receiver 1200 may additionally or alternatively transmit data to the corrections processing engine to be used in corrections generation.

The GNSS receiver 1200 is preferably coupled to an antenna made of a conductive material (e.g., metal). Antennas may additionally or alternatively include dielectric materials to modify the properties of the antennas or to provide mechanical support. The antennas may be of a variety of antenna types; for example, patch antennas (including rectangular and planar inverted F), reflector antennas, wire antennas (including dipole antennas), bow-tie antennas, aperture antennas, loop-inductor antennas, and fractal antennas. The antennas can additionally include one or more types of antennas, and the types of antennas can include any suitable variations. The antenna structure may be static or dynamic (e.g., a wire antenna that includes multiple sections that may be electrically connected or isolated depending on the state of the antenna). Antennas may have isotropic or anisotropic radiation patterns (i.e., the antennas may be directional). If antennas are directional, their radiation pattern may be dynamically alterable; for example, an antenna substantially emitting radiation in one direction may be rotated so as to change the direction of radiation. If the GNSS receiver 1200 couples to multiple antennas, an antenna coupler may split power between them using a splitter; additionally or alternatively, the antenna coupler may include a switch to select between the multiple antennas, or the antenna coupler may couple to the antennas in any suitable manner.

The GNSS receiver 1200 may include a front-end module that converts signals received at the antenna into digital baseband signals for processing. The front-end module preferably includes an analog-to-digital converter (e.g., the Maxim MAX2769) capable of operating at high sample rates. The front-end module is preferably capable of receiving L1 GPS, GLONASS, Galileo, and SBAS signal bands.
The front-end module may additionally or alternatively be capable of receiving additional bands (e.g., L2 GPS), or the receiver 1200 may include multiple front-end modules for different bands. The GNSS receiver 1200 may additionally include a satellite signal management module that functions to perform satellite signal tracking and acquisition. The satellite signal management module may additionally or alternatively include programmable digital notch filters for performing continuous wave noise nulling. The satellite signal management module preferably includes flexible and fully programmable correlators that may be used by a microcontroller to implement tracking loops and acquisition algorithms.

In a specific example, a GNSS receiver can measure satellite observations for at least three satellite constellations (e.g., GPS, Galileo, BDS, GLONASS, etc.). However, the GNSS receiver can measure satellite observations for one or two satellite constellations. In this specific example, the GNSS receiver preferably receives the satellite observations for at least 3 satellite constellations simultaneously, but can receive the satellite observations sequentially and/or in any order. In this specific example, the GNSS receiver preferably receives at least two frequencies (e.g., L1, L2, L5, E1, E5a, E5b, E6, etc.) for one or more of the satellite constellations, but can receive a single frequency for each satellite constellation. In this specific example, the GNSS receiver is preferably able to track at least 24 satellites, but can track any number of satellites. In this specific example, the GNSS receiver preferably detects (and identifies) unresolved pseudorange code ambiguities. In this specific example, the GNSS receiver preferably detects (and identifies) unresolved half-cycle carrier phase ambiguities. In this specific example, the GNSS receiver preferably detects (and identifies) RF interference exceeding a threshold (e.g., 1 dB, 5 dB, 10 dB, 30 dB, 50 dB, etc.) in frequency bands of interest. In this specific example, the GNSS receiver preferably detects (and identifies) spoofing attempts (e.g., for the satellite observations, for the corrections, etc.). In this specific example, the probability of unidentified cycle slip in the GNSS receiver is preferably at most about 10⁻¹ per hour (e.g., in open sky conditions). In this specific example, the probability of a GNSS receiver pseudorange measurement error greater than 10 m is less than 1/hour/satellite (e.g., in an open sky environment). Each satellite observation is preferably independent (e.g., of satellite observations corresponding to different satellites). For instance, the failure to observe a satellite observation from a given satellite should not trigger a predetermined event for another satellite. In this specific example, the GNSS receiver reacquisition time for carrier phase after a GNSS signal outage is preferably at most 6 seconds for the GPS L1 C/A signal. In this specific example, the GNSS receiver reacquisition time for carrier phase after a GNSS signal outage is preferably at most 2 seconds for the GPS L2C signal. In this specific example, the GNSS receiver reacquisition time for carrier phase after a GNSS signal outage is preferably at most 2 seconds for the Galileo E1 signal. In this specific example, the GNSS receiver reacquisition time for carrier phase after a GNSS signal outage is preferably at most 2 seconds for the Galileo E5b signal.
In this specific example, the GNSS receiver reacquisition time of pseudorange after a GNSS signal outage is preferably at most 1 second for the GPS L1 C/A signal. In this specific example, the GNSS receiver reacquisition time of pseudorange after a GNSS signal outage is preferably at most 1 second for the GPS L2C signal. In this specific example, the GNSS receiver reacquisition time of pseudorange after a GNSS signal outage is preferably at most 1 second for the Galileo E1 signal. In this specific example, the GNSS receiver reacquisition time of pseudorange after a GNSS signal outage is preferably at most 1 second for the Galileo E5b signal. In this specific example, the GNSS receiver preferably measures carrier phase with at most a 1-sigma carrier phase measurement noise of 0.005 cycles (e.g., in open sky conditions). However, the GNSS receiver can meet any specifications and/or measure any suitable satellite observations.

The corrections processing engine 1500 functions to generate corrections (e.g., correction data) to be used by the positioning engine 1100 (and/or the GNSS receiver 1200). The corrections are preferably used to improve the accuracy and/or integrity of the estimated position and/or velocity. The corrections may take the form of PPP corrections, RTK corrections, Satellite-Based Augmentation System (SBAS) corrections, or any other type of corrections. The corrections can be used to correct the satellite observations (e.g., as measured by the GNSS receiver), to facilitate carrier phase determination (e.g., by the carrier phase determination module), to facilitate detection of outliers (e.g., at an outlier detector), to facilitate determination of predetermined events, and/or in any suitable manner. In a specific example, the corrections processing engine (and/or components of the corrections processing engine) can include the system and/or components thereof and/or perform the method and/or steps thereof as described in U.S. patent application Ser. No. 16/589,932 titled "SYSTEMS AND METHODS FOR DISTRIBUTED DENSE NETWORK PROCESSING OF SATELLITE POSITIONING DATA," filed 1 Oct. 2019, which is incorporated herein in its entirety by this reference.

The corrections processing engine can additionally or alternatively function to determine a reliability of the corrections. The reliability preferably ensures that the corrections can enable (e.g., enable the positioning engine to determine) a high integrity estimated position and/or velocity. However, the reliability can be used in any suitable manner. The reliability is preferably determined based on a set of data sources (e.g., reference stations such as reliability reference stations) that is distinct (e.g., non-overlapping, nonidentical, etc.) from the data sources used to generate the corrections (e.g., reference stations such as corrections reference stations). However, the reliability and the corrections can be generated based on the same and/or any suitable data sources. The reliability can be a flag (e.g., use or don't use), an achievable integrity (e.g., of the estimated position, of the estimated velocity, of the real-valued carrier phase, of the integer-valued carrier phase, etc.), an error (e.g., standard deviation, variance, etc.), a confidence interval, and/or any suitable form. The corrections processing engine is preferably communicably coupled to the positioning engine and to the reference station(s), but can be communicably coupled to the GNSS receiver, the external system, the sensor(s), and/or to any suitable component.
In one implementation of an invention embodiment, rather than attempting to generate corrections solely from a small set of high-quality global reference stations (as in PPP) or by solely comparing data in GNSS receiver/reference station pairs (as in RTK), the corrections processing engine 1500 collects data from reference stations 1600 (and/or other reference sources) and, instead of (or in addition to) applying this data directly to generate corrections, uses the data to generate one or more corrections models (which can be used to generate corrections data in a form utilizable by the positioning engine 1100). By operating in this manner, the corrections processing engine 1500 may provide corrections (e.g., a set of corrections) that suffer from little of PPP's long convergence time issues, with solution complexity scaling directly with the number of reference stations N (unlike RTK, in which solution complexity scales at least with the number of possible pairs, i.e., O(N²); in fact, many current solutions scale with O(N³) or worse). Further, in some embodiments, the corrections processing engine 1500 enables spatial interpolation and/or caching of the corrections to be performed more generally than with traditional RTK. Virtual reference stations (also referred to as pseudo reference stations) typically involve the interpolation of RTK corrections data in real time (and, as discussed before, error correction scales in complexity with at least O(N²)). In contrast, interpolation in the corrections processing engine 1500 can be limited to specific aspects of global and/or local corrections models, providing more robustness to errors and/or predetermined events and better insight as to the causes of errors and/or predetermined events. In specific examples, specific aspects can include regions (e.g., specific localities), specific satellite observation data (e.g., pseudorange, carrier phase, etc.), specific satellite constellations, atmospheric models, and/or any suitable aspects of the global corrections. Further, unlike RTK, which requires real-time corrections data, the corrections processing engine 1500 may cache or otherwise retain model parameters even when data is limited (e.g., when a reference station becomes unavailable). However, the corrections processing engine may operate in a similar manner to an RTK corrections processing engine (e.g., performing real-time interpolation between reference stations, generating real-time corrections data, etc.). Note that the corrections processing engine 1500 may additionally or alternatively generate corrections data based on augmented satellite systems (e.g., WAAS, EGNOS, SDCM, MSAS, QZSS-SBAS, GAGAN, BDSBAS, etc.), auxiliary sensors, network information, almanac information, and/or in any manner.

In one implementation, as shown in FIG. 3, the corrections processing engine 1500 includes at least one of a reference station observation monitor 1510 (reference observation monitor), a correction data monitor 1512, a modeling engine 1520, and a reliability engine 1530 (e.g., an integrity engine). Note that the interconnections as shown in FIG. 3 are intended as non-limiting examples, and the components of the corrections processing engine 1500 may be coupled in any manner and/or the corrections processing engine may include any suitable component(s).

The reference station observation monitor 1510 functions to check reference station observations from reference stations (and/or other reference sources) for potential predetermined events.
For example, reference station observation monitor(s) 1510 may function to detect reference station multipath errors (e.g., pseudorange error greater than about 10 cm, 20 cm, 50 cm, 1 m, 2 m, 5 m, 10 m, 20 m, 50 m, etc. from a reference station 1600 of the set of reference stations), reference station interference errors, reference station cycle slip errors, reference station observation data corruption, and/or any other predetermined event related to reference station data. The reference station observation monitor is preferably communicably coupled to the modeling engine and/or the reliability engine, but can additionally or alternatively be coupled to the correction data monitor, the positioning engine, and/or any suitable component. The reference station observation monitor can additionally or alternatively mitigate the effect of predetermined events in the reference station observations. Mitigating the effect of predetermined events in the reference station observations can include: removing reference station observations (and/or associated data) associated with the predetermined event, scaling reference station observations (e.g., based on the predetermined events), correcting the predetermined event, and/or any suitable mitigation step(s). The reference station observation monitor(s) preferably take, as input, reference station observations (e.g., pseudorange, carrier phase, etc.) from reference stations 1600, but may additionally or alternatively take as input any data from reference stations 1600, sensor(s), satellite(s), networks, databases, GNSS receivers, and/or other reference sources. The location of each reference station is preferably known to the corrections processing engine, but the location can be provided to and/or be unknown to the corrections processing engine.

In a specific example, the corrections processing engine can include two reference station observation monitors 1510 and 1511. The reference station observation monitor 1510 functions to perform observation monitoring on reference station data that will be passed to the modeling engine 1520 to generate corrections. In contrast, the reference station observation monitor 1511 performs observation monitoring on reference station data used by the reliability engine 1530 (to validate corrections generated by the modeling engine 1520). The reference station observation monitors 1510 and 1511 preferably function identically, but additionally or alternatively may function differently (e.g., different thresholds for predetermined events, different predetermined event monitoring, etc.). The reference station observation monitors 1510 and 1511 may use any set(s) of reference sources. For example, the reference station observation monitors 1510 and 1511 may use non-overlapping sets of reference stations 1600 as data sources (so the corrections generated by the modeling engine 1520 do not depend upon the reference stations used to perform reliability checks). In a second example, the reference station observation monitor 1511 may use an independent subset of the set of reference stations 1600 used by the reference station observation monitor 1510. However, the reference stations 1600 referenced by the reference station observation monitors 1511 and 1510 can be: overlapping, a subset of the other, independent sets, and/or otherwise related. Similarly, these reference sources may receive satellite information from any set(s) of satellites. However, the two reference station observation monitors may receive the reference station observations from the same reference stations, overlapping reference stations (e.g., a subset of reference stations in common), and/or any suitable reference stations. However, a single reference station observation monitor can be used to monitor one or more sets of reference station observations, one or more sets of reference station observations can be used without monitoring, and/or any suitable reference station observation monitoring can be performed. The reference station observation monitor can operate in the same or a different manner from the observation monitor 1110.
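The following sketch illustrates the monitoring-and-mitigation pattern described above (removing or down-weighting reference station observations associated with a predetermined event such as multipath), under the assumption that a per-observation residual is already available; the field names and the down-weighting rule are illustrative, not the system's actual implementation:

```python
# Hedged sketch of reference-station observation monitoring: observations
# whose residual exceeds a threshold are removed or down-weighted.
def monitor_station_observations(observations, residuals, threshold_m=10.0,
                                 mitigation="remove"):
    """observations: list of dicts; residuals: per-observation residuals (m)."""
    kept = []
    for obs, res in zip(observations, residuals):
        if abs(res) <= threshold_m:
            kept.append(obs)
        elif mitigation == "scale":
            # Down-weight the flagged observation instead of dropping it.
            kept.append(dict(obs, weight=obs.get("weight", 1.0) * threshold_m / abs(res)))
        # mitigation == "remove": drop the flagged observation entirely
    return kept
```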
The correction data monitor 1512 functions to check inputs (e.g., global data) regarding one or more satellites (e.g., inputs that are independent of reference stations) for potential predetermined events. For example, the correction data monitor may take as input global clock data, satellite orbit data, satellite code biases, satellite phase biases, and/or other data not tied to a specific reference station 1600. The global data can be determined from satellite ephemeris, network information, database information, almanac information, satellite tracking, reference stations, GNSS receivers, and/or any suitable source. Example predetermined events detectable by the correction data monitor 1512 include satellite orbit errors, satellite clock errors, and/or any suitable predetermined events. The correction data monitor 1512 preferably provides such global corrections data to the modeling engine 1520 for corrections generation. The correction data monitor can mitigate the effect of predetermined events in the global data. For example, the correction data monitor can discard reference station observations corresponding to one or more satellites (e.g., satellites associated with predetermined events), can instruct the modelling engine to discard one or more reference station observations, can recollect the inputs (e.g., from the input source), can correct (and/or determine the correction for) the predetermined events, and/or perform any suitable mitigation. In some variants, the correction data monitor can perform a message field range test (MFRT) and/or any related test to detect predetermined events. In related variants, the correction data monitor can ensure that new satellite observations are consistent with almanac data and/or previous observations. However, the correction data monitor can function in any suitable manner. The global data (and/or the predetermined-event-mitigated global data) can be valid (e.g., usable by the modeling engine) indefinitely, for a predetermined amount of time (e.g., 1 hour, 2 hours, 4 hours, 8 hours, 24 hours, 48 hours, 72 hours, 1 week, 1 month, 1 year, etc.), as long as the satellite(s) are in view (e.g., of the reference station, of the GNSS receiver, etc.), and/or for any suitable amount of time. The correction data monitor is preferably communicably coupled to the modeling engine, but can additionally or alternatively be coupled to the reference station observation monitor, the reliability engine, the positioning engine, and/or any suitable component.
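As an illustration of the message field range test mentioned above, a minimal sketch follows; the field names and plausible ranges are assumptions for illustration and are not specification values:

```python
# Minimal MFRT-style check: each global-data field is tested against a
# plausible range; out-of-range fields are flagged as predetermined events.
PLAUSIBLE_RANGES = {
    "clock_bias_m":   (-1.0e4, 1.0e4),   # assumed plausible satellite clock bias
    "orbit_radial_m": (-50.0, 50.0),     # assumed plausible radial orbit correction
    "code_bias_m":    (-100.0, 100.0),   # assumed plausible code bias
}

def mfrt(message: dict) -> list[str]:
    """Return the names of fields that fail the range test."""
    return [field for field, (lo, hi) in PLAUSIBLE_RANGES.items()
            if field in message and not lo <= message[field] <= hi]
```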
In variants, the corrections processing engine can include a metadata monitor, which functions to receive metadata. The metadata is preferably used by the modeling engine in determining the corrections, but can be used to validate the corrections, detect predetermined events, and/or be used in any suitable manner. The metadata can be associated with reference stations, satellites, satellite constellations, the GNSS receiver, global metadata, local metadata, and/or any suitable source. In specific examples, metadata can include one or more of: reference station coordinates, earth rotation parameters, sun/moon ephemeris, ocean loading parameters, satellite attitude model, satellite phase center offset, satellite phase center variation, leap second, antenna type, receiver type, and/or any suitable data. The metadata monitor can additionally or alternatively function to validate the metadata. Metadata can be validated by comparing current metadata to previous metadata (e.g., almanac, a previous observation, etc.), by validating a metadata source (e.g., using CRC), and/or in any manner. The metadata monitor is preferably communicably coupled to the modeling engine, but can additionally or alternatively be coupled to the reference station observation monitor, the correction data monitor, the reliability engine, the positioning engine, and/or any suitable component.

The modeling engine 1520 functions to generate corrections data useable by the positioning engine 1100 to estimate the position and/or velocity of the GNSS receiver 1200. The modeling engine preferably generates corrections from reference station data (e.g., pseudorange and carrier phase from reference stations 1600), global corrections data (e.g., satellite clock bias, satellite orbit, etc.), and/or metadata (e.g., reference station positions, ocean tide loading, antenna type, receiver type, etc.), but may additionally or alternatively generate corrections using sensor data, satellite observations (e.g., as detected at a GNSS receiver), and/or any input data. The modeling engine can be communicably coupled to the reference station observation monitor, the correction data monitor, the metadata monitor, the reliability engine, the positioning engine, and/or any suitable component.

In one implementation of an invention embodiment, the modeling engine 1520 includes a set of PPP filters 1521, an atmospheric modeler 1522, and a correction generator 1523, as shown in FIG. 4. However, the modeling engine 1520 can be otherwise constructed. The PPP filter 1521 takes in reference station observations and estimates atmospheric delay for the reference stations 1600. In this implementation, each PPP filter preferably estimates the atmospheric delay associated with a single reference station 1600 of the set of reference stations; alternatively, each PPP filter 1521 may correspond to any number of reference stations 1600. The set of reference stations preferably corresponds to a geographical region (e.g., a state; a country; a continent; a county; a parish; a city; an area between approximately 10 mi² and 2×10⁸ mi², such as 1000 mi², 1×10³ mi², 1×10⁴ mi², 1×10⁵ mi², 5×10⁵ mi², 1×10⁶ mi², 3×10⁶ mi², 4×10⁶ mi², 2×10⁷ mi², etc.; etc.). However, the set of reference stations can correspond to any suitable locations.

The atmospheric modeler 1522 generates a model of atmospheric delay over the geographical region covered by the set of reference stations 1600. The atmospheric modeler 1522 preferably interpolates atmospheric delays as calculated by the PPP filters 1521 to generate a local (e.g., reference-station independent, but position dependent) model of atmospheric effects (e.g., tropospheric effects, ionospheric effects, etc.).
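A minimal sketch of such position-dependent interpolation follows, using inverse-distance weighting purely as a stand-in for the interpolation techniques enumerated below (e.g., kriging would expose the same interface while also predicting uncertainty); all names are illustrative assumptions:

```python
# Illustrative interpolation of per-station atmospheric delays onto a
# set of query points (e.g., a regular grid), via inverse-distance weighting.
import numpy as np

def interpolate_delays(station_xy: np.ndarray,    # (n, 2) station positions
                       station_delay: np.ndarray, # (n,) zenith delays, metres
                       grid_xy: np.ndarray,       # (m, 2) query-point positions
                       power: float = 2.0) -> np.ndarray:
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power        # avoid divide-by-zero at stations
    return (w * station_delay).sum(axis=1) / w.sum(axis=1)
```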
For example, the atmospheric modeler 1522 may transform a set of tropospheric effect models corresponding to individual reference locations (each having a known position) to a regularly spaced grid. Additionally or alternatively, the atmospheric modeler 1522 may function in any manner (e.g., by creating a continuous interpolated model of atmospheric effects rather than a discrete grid, using some other arrangement of discrete points than a grid pattern, generating a fitted atmospheric model, using a neural network, based on a set of equations, etc.). Note that any interpolation technique may be used; for example, Gaussian processes such as kriging (which can also predict uncertainty at the interpolated points); Hermite interpolation; displacement interpolation; rational interpolation; spline interpolation; polynomial interpolation; linear interpolation; piecewise constant interpolation; and/or any interpolation technique. The atmospheric model (e.g., local position dependent model) may be referred to as a "unified position-based model" (since it unifies the output of multiple models corresponding to individual reference sources).

The correction generator 1523 generates corrections (e.g., a precise correction) based on the atmospheric model and other corrections data (e.g., global correction data such as satellite clock biases; metadata; etc.). The correction generator 1523 functions to generate corrections, usable by the positioning engine 1100, from the atmospheric model generated by the atmospheric modeler 1522. The correction generator 1523 may additionally or alternatively send or use correction data in any manner to correct position data (e.g., the correction generator 1523 may take as input an estimated position and/or velocity and generate corrected position data, such as a positioning correction to be implemented by the positioning engine, rather than corrections). The correction generator 1523 preferably additionally generates the estimated uncertainty in the generated corrections (traditional PPP/RTK solutions are not capable of doing this). The estimated uncertainty in the generated corrections can be determined based on (e.g., given) uncertainty in the input parameters (e.g., error propagation, Monte Carlo simulations based on the uncertainty in the input parameters, etc.), empirically (e.g., from historical data), using simulations, heuristically, calculated, or be otherwise determined. The corrections generated by the correction generator 1523 preferably include corrections that correct for the effects of satellite orbit and clock error, satellite code and phase biases, and atmospheric effects (e.g., ionospheric delay, rate of change of ionospheric delay, tropospheric delay such as zenith tropospheric delay, etc.), but may additionally or alternatively include corrections for any set of errors. In some variants, the corrections (e.g., corrections associated with a given satellite constellation, corrections associated with a given satellite, etc.) can include (e.g., be corrected for) regional offsets, for example, by transmitting a residual regional offset associated with the correction(s).
The reliability engine 1530 functions to verify the reliability of corrections (e.g., integrity of the corrections) generated by the modeling engine 1520. The reliability engine 1530 may also transmit or otherwise prepare corrections for use in positioning. The reliability engine can optionally issue flags (e.g., satellite flags, atmospheric flags, line of sight flags, etc.), which can prevent local systems from using a given set of corrections (e.g., if an integrity issue is detected). The reliability engine 1530 preferably verifies corrections based upon independent reference stations (and/or reference station observations), as shown in FIG. 3. However, the reliability engine can verify corrections based upon common reference stations (e.g., reference station observations) and/or any suitable data sources.

In one implementation of an invention embodiment, the reliability engine 1530 includes a residual computer 1531 (e.g., residual computation module), a correction residual monitor 1532, and a velocity bias monitor 1533, as shown in FIG. 5. However, the reliability engine can include any other suitable set of modules. In this implementation, the residual computer 1531 calculates residual values by applying the corrections to the reliability reference station observations. However, the residual computer can directly compare the corrected reliability reference station observations and/or correct the reliability reference station observations in any suitable manner. The correction residual monitor 1532 can compare the residuals (and/or other corrected reliability reference station observations) to one or more thresholds. The correction residual monitor preferably operates in real time, but can operate off-line, in near-real time, with a delay, and/or with any suitable timing. The threshold(s) can be: specific to different types of feared events or failure modes; global thresholds; or any other suitable threshold. The threshold can be determined empirically, using modelling (e.g., Monte Carlo modelling), using a neural network, using artificial intelligence; be predetermined, constant (e.g., 1%, 2%, 5%, 10%, 20%, 25%, 33%, 50%, 75%, etc. of the corrections), manually determined, regulatorily determined, the integrity bounds; and/or be determined in any suitable manner. In a specific example, Monte Carlo simulations can be used to determine the impact of a correction (and potentially an associated correction error) on the estimated position, on the real-valued carrier phase determination, on fixing the integer-valued carrier phase, on the estimated velocity, and/or on another parameter. The threshold can be set based on the results from the Monte Carlo modelling or otherwise determined. The thresholds can be associated with an integrity bound (e.g., when the residuals meet the threshold, the corrections can enable a high accuracy estimated position, integer-valued carrier phase ambiguity determination, intermediate data generation, etc.), a confidence that an integrity bound can be achieved (e.g., a confidence that the corrections can enable a high accuracy estimated position, integer-valued carrier phase ambiguity determination, intermediate data generation, etc.), and/or any suitable results.
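A hedged sketch of the residual computation and threshold gate described above follows, assuming corrections enter additively and that a geometric prediction from the reliability station's precisely known position is available; all names are illustrative:

```python
# Sketch: apply corrections to reliability-station observations, form
# residuals against the station's known geometry, and gate the corrections.
import numpy as np

def correction_residuals(observed: np.ndarray, predicted: np.ndarray,
                         corrections: np.ndarray) -> np.ndarray:
    """Residual = corrected observation minus the geometric prediction
    computed from the reliability station's precisely known position."""
    return (observed + corrections) - predicted

def corrections_usable(residuals: np.ndarray, threshold: float) -> bool:
    return bool(np.all(np.abs(residuals) <= threshold))
```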
When the residual is less than or equal to the threshold, the corrections can be transmitted to and/or used by the positioning engine (e.g., the reliability of the corrections can indicate that the corrections are safe to use, will generate an estimated position and/or velocity with a target integrity, etc.). When the residual is greater than the threshold, the reliability of the corrections can: indicate that the corrections should not be used, indicate that the corrections can be used to estimate the position (and/or velocity) with an integrity risk exceeding a target integrity, and/or be used in any suitable manner. However, the corrections can alternatively not be transmitted to the positioning engine when the residual exceeds the threshold, and/or the residual can be used in any suitable manner.

In one variation, individual residuals are compared against the thresholds. In a second variation, the residuals are compared against integrity bounds on one or more linear combinations of satellite frequencies and/or signals. This second variation can optionally determine whether a failure is due to a satellite correction or an atmospheric correction. The linear combinations can correspond to two frequency linear combinations (e.g., linear combinations of any two frequency satellite signals such as L1, L2, L3, L4, L5, E1, E2, E5a, E5b, E5AltBOC, E6, G1, G3, etc., such as the Melbourne-Wübbena combination, etc.), three frequency linear combinations (e.g., linear combinations of any three frequency satellite signals such as the Hatch-Melbourne-Wübbena combination), four frequency linear combinations, n-frequency linear combinations (e.g., where n is an integer), linear combinations of satellite signals from different satellites (and/or satellite constellations) such as multisystem combinations, geometry-free combinations, wide-lane combinations, narrow-lane combinations, ionosphere-free combinations, and/or any suitable linear combinations. In a specific example of the second variation, the linear combination can correspond to a linear combination (e.g., a geometry-free linear combination such as a two frequency geometry-free linear combination, a three frequency geometry-free linear combination, etc.) that eliminates non-dispersive components (e.g., satellite clock, satellite orbit, troposphere effects, etc.) and/or amplifies (or isolates) dispersive components (e.g., ionosphere effects, atmospheric effects, etc.) of the GNSS signals. In this specific example, when the coefficients and/or residual observations from several frequencies exceed a threshold, the correction residual monitor can output a flag classifying the error as an atmospheric error. In a second specific example of the second variation, the linear combination can correspond to a linear combination (e.g., an ionosphere-free linear combination such as a two frequency ionosphere-free linear combination, a three frequency ionosphere-free linear combination, etc.) that eliminates dispersive and/or amplifies (or isolates) non-dispersive components of the GNSS signals. In this specific example, when the coefficients and/or residual observations from several frequencies exceed a threshold, the correction residual monitor can output a flag classifying the error as a satellite error. In a third variation, the correction residual monitor can include a trained classifier that ingests the residuals and optionally the satellite frequencies and/or signals, and outputs a flag classification or probability for each of a set of predetermined flags. However, the residuals can be otherwise compared against the thresholds.
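To illustrate the second variation, the following sketch forms geometry-free and ionosphere-free combinations of dual-frequency residuals (here L1/L2) and maps threshold exceedances to the atmospheric-error and satellite-error flags described above; the thresholds and the reduction to scalar residuals are illustrative assumptions:

```python
# Geometry-free and ionosphere-free combinations of dual-frequency residuals.
F_L1, F_L2 = 1575.42e6, 1227.60e6  # Hz

def geometry_free(r1: float, r2: float) -> float:
    """Cancels non-dispersive terms (orbit, clocks, troposphere); the
    remainder is dominated by dispersive (ionospheric) error."""
    return r1 - r2

def ionosphere_free(r1: float, r2: float) -> float:
    """Cancels first-order ionosphere; the remainder is dominated by
    non-dispersive (e.g., satellite) error."""
    g = F_L1**2 / (F_L1**2 - F_L2**2)
    return g * r1 - (g - 1.0) * r2

def classify_failure(r1: float, r2: float, gf_thresh: float, if_thresh: float):
    flags = []
    if abs(geometry_free(r1, r2)) > gf_thresh:
        flags.append("atmospheric error")
    if abs(ionosphere_free(r1, r2)) > if_thresh:
        flags.append("satellite error")
    return flags
```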
The velocity bias monitor 1533 compares the change in the residuals over time (e.g., to detect drift that may affect velocity estimation) to a threshold. In variants, the velocity bias monitor 1533 can detect abnormal drift that would impact velocity estimation. The threshold can be determined empirically, using modelling (e.g., Monte Carlo modelling), using a neural network, using artificial intelligence, be predetermined, constant (e.g., 1%, 2%, 5%, 10%, 20%, 25%, 33%, 50%, 75%, etc. of the correction change over time), and/or be determined in any suitable manner. When the change in the residuals is less than a threshold, the corrections can be transmitted to and/or used by the velocity engine (e.g., the reliability of the corrections can indicate that the corrections are safe to use, will generate an estimated velocity with a target integrity, etc.). When the change in the residuals is greater than the threshold, the reliability of the corrections can: indicate that the corrections should not be used, indicate that the corrections can be used to estimate the velocity with an integrity risk exceeding a target integrity, and/or be used in any suitable manner. However, the corrections can alternatively not be transmitted to the velocity engine when the residual exceeds the threshold, and/or the residual can be used in any suitable manner.

The reliability engine 1530 may additionally include a satellite threat model provider 1534 that provides information needed by the positioning engine 1100 to calculate high-integrity positioning (e.g., satellite probabilities of failure, constellation probabilities of failure, etc.). The satellite threat model provider can optionally provide threat probabilities and/or amplitudes to third parties (e.g., a manufacturer backend, etc.). The output of the satellite threat model provider 1534 is preferably independent of the correction residuals, but can be dependent on the correction residuals. The satellite threat model provider can be a database, API endpoint, or other data source. The data of the satellite threat model provider can be determined: empirically (e.g., from historical data), using simulations, heuristically, calculated, or otherwise determined.

Finally, the reliability engine 1530 may additionally perform regional residual correction generation. As mentioned previously, in some implementations, the system 1000 may utilize data corresponding to multiple satellite constellations to provide high-integrity positioning. In these implementations, the reliability engine 1530 may send multiple sets of corrections data (e.g., corresponding to different satellite constellations, corresponding to different individual and/or subsets of satellites, etc.). These sets of correction data are preferably independent of each other, but can be dependent on each other. As with corrections generally, they may be generated using data from any set of reference stations (e.g., overlapping, non-overlapping). While the reliability engine 1530 may transmit multiple sets of corrections in their entireties, the reliability engine 1530 may additionally or alternatively transmit a primary set of corrections and then "residual" secondary sets of corrections (e.g., the primary set of corrections is transmitted in its entirety, and secondary sets are transmitted as modifications to the primary set of corrections). This can reduce the overall amount of information needed to be transmitted to GNSS receivers.
The reference stations 1600 function to provide reference station observations (e.g., pseudorange and/or carrier phase data, such as corresponding to one or more satellites, corresponding to one or more satellite constellations, etc.) used to generate corrections. Reference stations 1600 preferably have a location known to a high degree of accuracy. Reference station location is preferably the location of the antenna used to receive satellite signals, but can be any suitable location. Reference station location may be determined in any manner yielding a high degree of accuracy; for example, reference station location may be determined by a number of receivers set around the reference station at vertical and horizontal reference points. Note that while reference stations 1600 are preferably fixed in location, they may additionally or alternatively be mobile. Station position is preferably re-determined to high accuracy before a moved reference station restarts providing reference station observations; additionally or alternatively, reference stations may provide reference station observations before location re-determination (for example, for use in attitude estimation; alternatively, data may be provided but not used). Note that fixed reference stations 1600 may, over time, "self-survey" their own positions to a reasonably high degree of accuracy.

Reference stations 1600 preferably provide phase and pseudorange data for multiple satellite signals and the location of the reference station 1600 via the internet, but may additionally or alternatively provide data by any other suitable method (e.g., transmission by cellular radio modem). Reference station data is preferably made available directly to the system 1000, but may additionally or alternatively be processed or aggregated before being made available to the system 1000. Reference stations 1600 preferably have one or more satellite receivers and generate corrections based on those receivers. The number and quality of satellite receivers used by a reference station (or other factors, like antenna type/size/location) may determine the accuracy of reference station data. Reference stations 1600 (or other sources of reference station data; e.g., a reference source that creates correction data from multiple reference stations) may be ordered or grouped by reference station quality (e.g., accuracy of corrections) and/or locality (e.g., if corrections are desired for a particular GNSS receiver, reference stations may be ordered or grouped by distance to that receiver).

In specific variants, the system can include a plurality of sets of reference stations. For example, reference station observations corresponding to a first set of reference stations (e.g., corrections reference stations) can be used to generate the corrections. Reference station observations corresponding to a second set of reference stations (e.g., reliability reference stations) can be used to validate the corrections and/or determine a reliability of the corrections. The first set of reference stations are preferably regional reference stations (e.g., covering and/or spanning a region such as a city, county, parish, state, country, cluster of countries, continent, etc.). Reference stations of the first set of reference stations can be separated by about 1 mi, 5 mi, 10 mi, 20 mi, 30 mi, 50 mi, 100 mi, 150 mi, 200 mi, 300 mi, 500 mi, 1000 mi, 2000 mi, and/or have any suitable separation. The second set of reference stations are preferably local reference stations (e.g., reference stations within 0.1 mi, 0.5 mi, 1 mi, 2 mi, 3 mi, 5 mi, 10 mi, 20 mi, 50 mi, etc. of the GNSS receiver). However, the first and second sets of reference stations can include any suitable reference stations located in any suitable area.
In specific variants of the system including one or more sensors 1700, the sensors preferably function to measure (e.g., provide) sensor data (e.g., auxiliary data, validation data, back-up data, supplemental data, etc.). The sensor data can be used by the positioning engine and/or corrections processing engine to assist (e.g., speed up, correct, refine, etc.) the position estimation, velocity estimation, corrections generation, corrections validation, carrier phase determination (e.g., estimating the integer-valued carrier phase ambiguity), and/or any suitable process. In a specific example, the sensor data can be used to estimate the receiver position at times when satellite observations are not received (e.g., in an urban canyon; due to an obstruction such as a billboard, weather, etc.; etc.). However, the sensor data can be used at any suitable time and in any suitable manner. The sensors are preferably in communication with the computing system, but can be in communication with the GNSS receiver, one or more reference stations, and/or with any suitable component. The sensors can be: on-board the external system, integrated into the mobile receiver (e.g., onboard the same external system, onboard a different external system), and/or separate from the mobile receiver, or otherwise associated with the mobile receiver. The sensor data is preferably for the external system, but can additionally or alternatively be for the mobile receiver or any other suitable sensor reference. The sensor data can include: inertial data (e.g., velocity, acceleration), odometry, pose (e.g., position, orientation), mapping data (e.g., images, point clouds), temperature, pressure, ambient light, and/or any other suitable data. The sensors can include one or more of: an inertial measurement unit (IMU), an inertial navigation system (INS), an accelerometer, a gyroscope, a magnetometer, an odometer (e.g., visual odometry, wheel odometry, etc.), and/or any suitable sensors.

In a specific example, the system can include a plurality of INSs (e.g., 2, 3, 5, 10, etc. INS sensors). Each INS of the plurality preferably generates an independent estimate of the position and/or velocity (e.g., using dead reckoning). However, two or more of the INSs can generate dependent estimates of the position and/or velocity. The independent dead reckoning positions and/or velocities can be used to validate the dead reckoning position. In a specific variant, the dead reckoning position (and/or velocity) estimate can be the position (and/or velocity) range that surrounds the overlapping region between the individual INS position (and/or velocity) estimates. In a second specific variant, the dead reckoning position (and/or velocity) estimate can be the overlapping position (and/or velocity) range from the individual INS position (and/or velocity) estimates. However, the dead reckoning position and/or velocity can be determined in any suitable manner.
Specific Examples of the System and Method of Use

In an illustrative example, as shown in FIG. 12, a system for estimating the position of a GNSS receiver can include a remote server that can include: a reference station observation monitor configured to: receive a first set of reference station observations associated with a first set of reference stations and a second set of reference station observations associated with a second set of reference stations; detect a predetermined event in the first set of reference station observations and the second set of reference station observations; and, when the predetermined event is detected, mitigate an effect of the predetermined event; a modeling engine configured to generate corrections based on the first set of reference station observations; and a reliability engine configured to determine a reliability of the corrections generated by the modeling engine based on the second set of reference station observations. The system can additionally or alternatively include a positioning engine executing on a computing system collocated with the receiver. The positioning engine can include: an observation monitor configured to: receive a set of satellite observations from a set of global navigation satellites corresponding to at least one satellite constellation; detect a predetermined event in the set of satellite observations; and, when the predetermined event in the set of satellite observations is detected, mitigate an effect of the predetermined event in the set of satellite observations; a float filter configured to determine a real-valued carrier phase ambiguity estimate based on the set of satellite observations and the corrections having a reliability greater than a predetermined threshold; an integer ambiguity resolver configured to fix the real-valued carrier phase ambiguity estimate to an integer-valued carrier phase ambiguity, wherein the integer-valued carrier phase ambiguity is validated in a multi-step process; and a position filter configured to estimate a position of the receiver, wherein an integrity risk and a protection level of the estimated position depend on a validation step of the multi-step process.

In a first variation, the external system (e.g., using the position) specifies the integrity risk and/or protection level required for a given functionality, wherein the position output by the system is not used if the integer-valued carrier phase ambiguity is not validated to the specified integrity risk and/or protection level (e.g., and an auxiliary positioning system is used instead). Alternatively, the functionality is not enabled when the integer-valued carrier phase ambiguity is not validated to the specified integrity risk and/or protection level. In a second variant, the external system's enabled functionalities and/or operation conditions are adjusted based on the integrity risk and/or protection level of the instantaneous or historic position. In an illustrative example, the notification presentation triggering distance (e.g., a proximity distance for a proximity alert) can be a function of the integrity risk and/or the protection level (e.g., increased when the integrity risk increases).
In a specific example, a method for determining a position of a global navigation satellite system (GNSS) receiver (e.g., using the system, using any suitable system) can include, at a remote server: receiving a first set of reference station observations (e.g., associated with a first set of reference stations); detecting a first set of predetermined events; when at least one predetermined event of the first set of predetermined events is detected, mitigating an effect of the detected predetermined event; generating an atmospheric model based on the first set of reference station observations; determining corrections based on the atmospheric model; and validating the corrections using a second set of reference station observations (e.g., associated with a second set of reference stations). The method can additionally or alternatively include, at a computing system collocated with the GNSS receiver: receiving the validated corrections from the remote server; receiving a set of satellite observations from a set of global navigation satellites corresponding to at least one satellite constellation; detecting a second set of predetermined events; when at least one predetermined event of the second set of predetermined events is detected, mitigating an effect of the detected predetermined event of the second set of predetermined events; resolving a carrier phase ambiguity for the set of satellite observations based in part on the validated corrections; validating the carrier phase ambiguity using a multistep validation process; and estimating a position of the GNSS receiver based on the validated carrier phase ambiguity, wherein an integrity risk and a protection level of the estimated position depend on which step of the multistep validation process is used to validate the carrier phase ambiguity.

In this specific example, validating the corrections can include: correcting the second set of reference station observations using the corrections; determining residuals for the set of corrected reference station observations; and validating the corrections when the residuals are below a correction validation threshold. In this specific example, generating the atmospheric model can include estimating an atmospheric delay associated with each reference station of the first set of reference stations using a PPP filter and interpolating (e.g., using kriging) between the atmospheric delays associated with each reference station to generate the atmospheric model.

In this specific example, the multistep validation process can include: a first validation step, wherein the carrier phase ambiguities are validated simultaneously; a second validation step, after the first validation step, wherein a first subset of carrier phase ambiguities corresponding to a first subset of satellites of the set of global navigation satellites are validated simultaneously and a second subset of carrier phase ambiguities corresponding to a second subset of satellites of the set of global navigation satellites are validated simultaneously; and a third validation step, after the second validation step, wherein the second validation step is repeated at least twice. In this specific example, the first subset of satellites can correspond to a first satellite constellation and the second subset of satellites can correspond to a second satellite constellation different from the first satellite constellation.
However, the first and second subsets of satellites can correspond to subsets of the same satellite constellation, combinations of satellite constellations, and/or any suitable satellites. In this specific example, the integrity risk and the protection level of the position of the GNSS receiver can be at most 10⁻⁴ per hour and 2 m, respectively, when the carrier phase ambiguities are validated to a first validation step of the multistep process; at most 10⁻⁶ per hour and 2 m, respectively, when the carrier phase ambiguities are validated to a second validation step of the multistep process; and at most 10⁻⁷ per hour and 3 m, respectively, when the carrier phase ambiguities are validated to a third validation step of the multistep process. In this specific example, resolving the carrier phase ambiguity can include: determining a real-valued phase ambiguity using a Kalman filter; and fixing the real-valued phase ambiguity to an integer-valued phase ambiguity, including decorrelating the real-valued phase ambiguity using at least one of a LAMBDA algorithm or an MLAMBDA algorithm. In this specific example, the first set of predetermined events can correspond to high dynamic events (e.g., at least one of environmental feared events, network feared events, satellite clock drift of at most 1 cm/s, issue of data anomaly, erroneous broadcast ephemeris, constellation failure, reference station multipath, and reference station cycle slip) and the second set of predetermined events can correspond to low dynamic events (e.g., at least one of code carrier incoherency, satellite clock step error, satellite clock drift greater than 1 cm/s, pseudorange multipath, carrier phase multipath, carrier phase cycle slip, non-line of sight tracking, false acquisition, Galileo binary offset carrier second peak tracking, and spoofing). This specific example can further include, independently of estimating the position of the GNSS receiver, estimating a velocity of the GNSS receiver using time-differenced carrier phase measurements. In this specific example, at least one of the protection level of the estimated position and a protection level of the velocity can be determined using an advanced receiver autonomous integrity monitoring (ARAIM) algorithm using only carrier phase ambiguities. This specific example can include automatically operating a vehicle based on at least one of the estimated position and the velocity, wherein the GNSS receiver is coupled to the vehicle. This specific example can include, when satellite observations corresponding to one or more satellites of the set of global navigation satellites are unavailable, determining the position of the GNSS receiver using dead reckoning based on data associated with an inertial navigation system; and validating the position determined using dead reckoning by comparing a first dead reckoning position determined based on the data associated with the inertial navigation system with a second dead reckoning position determined based on data associated with a second inertial navigation system.

The method is preferably implemented by the system, but may additionally or alternatively be implemented by any system for estimating the position and/or velocity of a GNSS receiver and/or external system based on satellite observations. However, the system can include any suitable components. However, the method can include any suitable steps and/or substeps.
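The relationship in this specific example between validation depth and reported integrity can be summarized as a lookup, sketched below; the table structure is illustrative, but the values are those stated above:

```python
# Mapping from the validation step reached to the reported integrity
# risk and protection level in the specific example above.
INTEGRITY_BY_VALIDATION_STEP = {
    1: {"integrity_risk_per_hr": 1e-4, "protection_level_m": 2.0},
    2: {"integrity_risk_per_hr": 1e-6, "protection_level_m": 2.0},
    3: {"integrity_risk_per_hr": 1e-7, "protection_level_m": 3.0},
}

def reported_integrity(validated_step: int) -> dict:
    return INTEGRITY_BY_VALIDATION_STEP[validated_step]
```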
Specific Example of a System and Method for RTK Satellite Positioning As shown in FIG. 8, a specific example of the method 200 for Real Time Kinematic (RTK) satellite positioning includes: at a mobile receiver, receiving a navigation satellite carrier signal S210, receiving a phase correction signal from a reference station S220, calculating integer phase ambiguity S230, and calculating receiver position S240. Step S210 includes receiving a navigation satellite carrier signal. Step S210 functions to provide the mobile receiver with a phase measurement and a pseudo-range measurement that can be used, along with a phase correction signal (received in Step S220), to calculate receiver position. Navigation satellite carrier signals are preferably received at the L1 frequency (1575.42 MHz), but may additionally or alternatively be received at the L2 frequency (1227.60 MHz) or any other suitable frequency. Navigation satellite carrier signals received in Step S210 may include GPS signals, GLONASS signals, Galileo signals, SBAS signals, and/or any other suitable navigation signal transmitted by a satellite. Step S210 preferably includes receiving the navigation satellite carrier signal (which is an RF signal) at an RF antenna and converting the signal to a digital baseband signal. This digital baseband signal is preferably used for two tasks by Step S210: calculating the pseudo-range from the receiver to the satellite (using standard GNSS time-of-flight techniques) and measuring the relative phase of the carrier signal. Step S210 is preferably performed for multiple satellites. The use of pseudo-range and phase data from multiple satellites can provide for more accurate positioning, as described in later sections. If receiver carrier signals are received at both L1 and L2 frequencies, Step S210 may include combining the L1 and L2 frequency signals for each satellite to create a beat signal. The resulting signal (i.e., the beat signal) has a center frequency significantly lower than either the L1 or L2 signals (~347.82 MHz), which allows for a smaller set of possible integer ambiguity values for a given prior (e.g., |N| ≤ 10 for an L1 signal, |N| ≤ 2 for the example beat signal). The resulting signal may additionally or alternatively possess other desirable properties (e.g., reduction in ionospheric error). In a variation of a preferred embodiment, the method 200 includes Step S211: transmitting carrier signal data (e.g., pseudo-range and/or phase data) from the receiver to a remote computer (e.g., a computer at a reference station, a cloud computing server). In this variation, Steps S220 through S240 may additionally be performed on the remote computer. Step S220 includes receiving a phase correction (or phase observation) signal from a reference station. Step S220 functions to receive phase correction information used to determine, for a given satellite signal, the location of the mobile receiver. Step S220 preferably includes receiving phase correction information for each satellite signal received in Step S210, but may additionally or alternatively include receiving phase correction information for only a subset of the satellite signals received in Step S210. If Step S220 includes receiving phase correction information for only a subset of satellite signals in Step S210, Step S220 may include estimating phase correction information for any of the subset of satellite signals for which phase correction information is not received.
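As a rough numerical illustration of the Step S210 beat-signal combination described above (a sketch assuming the recited L1/L2 center frequencies; the variable names are illustrative, not from the disclosure):

# Sketch: beat (wide-lane) frequency and wavelength from the L1 and L2 carriers.
C = 299_792_458.0            # speed of light, m/s
F_L1 = 1575.42e6             # L1 center frequency, Hz
F_L2 = 1227.60e6             # L2 center frequency, Hz

beat_freq = F_L1 - F_L2      # ~347.82 MHz, as recited above
lambda_l1 = C / F_L1         # ~0.19 m
lambda_beat = C / beat_freq  # ~0.86 m

print(f"beat frequency: {beat_freq / 1e6:.2f} MHz")
print(f"L1 wavelength: {lambda_l1:.3f} m, beat wavelength: {lambda_beat:.3f} m")

Because the beat wavelength is several times longer than the L1 wavelength, a position prior of fixed size spans far fewer integer candidates, consistent with the |N| ≤ 10 versus |N| ≤ 2 comparison above.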
Step S220 includes receiving phase correction information from at least one reference station, but may also include receiving phase correction information from additional reference stations. Step S220 may include receiving phase correction information for some satellites from one reference station while receiving phase correction information for other satellites from another reference station. Additionally or alternatively, Step S220 may include receiving phase correction information from multiple reference stations for a single satellite signal. Step S220 preferably includes receiving phase correction signals over a UHF radio (e.g., at 915 MHz), but may additionally or alternatively include receiving phase correction signals over any suitable communication medium (e.g., an internet connection, a cellular connection). Phase correction signals preferably include carrier signal phase (as measured at the reference station) and reference station location information (or other identifying information linked to location). Phase correction signals may additionally include pseudo-range data from the reference station, positioning code data, or any other relevant data. Phase correction signals are preferably formatted as RTCMv3 messages, but may additionally or alternatively be formatted according to any suitable standard or method. Reference stations used for transmitting phase correction signals may include dedicated RTK reference stations, Continuously Operating Reference Stations (CORS), Network RTK solutions (including virtual reference station solutions), or any other suitable reference station. Step S230 includes calculating integer phase ambiguity. Step S230 functions to allow for determination of the absolute difference in phase between a satellite carrier signal received at a reference station and a satellite carrier signal received at a mobile receiver, which in turn enables the position of the mobile receiver relative to the reference station to be calculated. Integer phase ambiguity is preferably calculated using double-differenced measurements of pseudo-range and relative phase. Double-differenced measurements are preferably calculated by taking, across a pair of satellites, the difference of the single differences between receiver and reference station values. For example, the double-differenced measurements of pseudo-range and phase for two satellites (satellites 1 and 2) can be modeled as

$$\rho_{12} = (\rho_{mr} - \rho_{ref})_{i=1} - (\rho_{mr} - \rho_{ref})_{i=2}$$

$$\phi_{12} = (\phi_{mr} - \phi_{ref})_{i=1} - (\phi_{mr} - \phi_{ref})_{i=2}$$

where i is the satellite index, ρ_mr, ϕ_mr are pseudo-range and phase measurements at the mobile receiver, and ρ_ref, ϕ_ref are pseudo-range and phase measurements at the reference station. More specifically, for a mobile receiver and a reference station separated by a vector b, the double-differenced equations for pseudo-range ρ and phase ϕ can be written as

$$\nabla\Delta\rho = \begin{pmatrix} \rho_{10} \\ \vdots \\ \rho_{n0} \end{pmatrix} = \begin{pmatrix} e_1 - e_0 \\ \vdots \\ e_n - e_0 \end{pmatrix} \cdot b + \epsilon_\rho = DE \cdot b + \epsilon_\rho$$

$$\nabla\Delta\phi = \begin{pmatrix} \phi_{10} \\ \vdots \\ \phi_{n0} \end{pmatrix} = \frac{DE \cdot b}{\lambda} + N + \epsilon_\phi$$

where e_n is the unit line-of-sight vector to satellite n, ε represents noise, λ is the wavelength of the carrier signal, and N is the integer phase ambiguity. The use of double-differenced measurements allows for the cancellation of satellite clock errors, receiver clock errors, and some atmospheric error. Step S230 preferably includes two substeps: generating a set of hypotheses S231 and performing hypothesis testing on the set of hypotheses S232. Additionally or alternatively, S230 may include calculating integer phase ambiguity N using any number or type of steps.
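The double-differencing arithmetic above can be sketched in a few lines. This is an illustrative numpy sketch; the function name and the choice of satellite 0 as the reference satellite are assumptions, not requirements of the disclosure.

import numpy as np

def double_difference(meas_mr, meas_ref, ref_sat=0):
    """Form double-differenced measurements (pseudo-range or phase).

    The receiver-minus-reference single difference cancels satellite clock
    error and much of the atmospheric error; differencing those single
    differences against a reference satellite then cancels receiver clock
    error, as described above.
    """
    single = np.asarray(meas_mr, float) - np.asarray(meas_ref, float)
    return np.delete(single - single[ref_sat], ref_sat)

# Usage: rho_12-style double differences for satellites 1..n against satellite 0
rho_mr = np.array([21.4e6, 22.1e6, 20.9e6])
rho_ref = np.array([21.4e6 + 3.2, 22.1e6 + 5.1, 20.9e6 + 1.7])
print(double_difference(rho_mr, rho_ref))  # [-1.9  1.5]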
Step S231 functions to produce a set of possible values for N as well as perform iterative refinement on that set. Step S231 preferably includes producing a set of possible values for N using a Kalman filter process. Kalman filters are recursive filters that estimate the state of a linear dynamic system based on a series of noisy measurements. In general form, the measurement equation appears as

$$z_i = H_i x_i + v_i$$

where z_i is the measurement at time (or step) i, x_i is the true state, v_i is observation noise (zero mean and with known covariance), and H_i is the observation model that maps the true state space into the observed space. The Kalman filter model further assumes that there is a relationship between states at different times given by

$$x_i = F_i x_{i-1} + w_i$$

where w_i is process noise (also zero mean and with known covariance) and F_i is the transition model that maps the true state at time i−1 to the true state at time i. In particular, Step S231 preferably includes producing a set of possible values for N using a type of Kalman filter known as a Bierman-Thornton filter; additionally or alternatively, Step S231 may use any suitable process to produce possible values for N. Starting with the equation

$$\begin{pmatrix} \nabla\Delta\phi_i \\ \lambda\nabla\Delta\phi_i - \nabla\Delta\rho_i \end{pmatrix} = \begin{pmatrix} \frac{1}{\lambda} DE_i & I \\ 0 & \lambda I \end{pmatrix} \begin{pmatrix} b_i \\ N \end{pmatrix}$$

and noting that, for any matrix A operating on a normally distributed random variable x with covariance Σ, the random variable y = Ax will have covariance AΣAᵀ, and that for the matrix A there are subspaces Ker[A] for which any vectors x ∈ Ker[A] have the property 0 = Ax, a matrix Q_i can be constructed such that 0 = Q_i DE_i, and this matrix can be applied to form a second equation:

$$\begin{pmatrix} Q_i \nabla\Delta\phi_i \\ \lambda\nabla\Delta\phi_i - \nabla\Delta\rho_i \end{pmatrix} = \begin{pmatrix} \frac{1}{\lambda} Q_i DE_i & Q_i \\ 0 & \lambda I \end{pmatrix} \begin{pmatrix} b_i \\ N \end{pmatrix} = \begin{pmatrix} 0 & Q_i \\ 0 & \lambda I \end{pmatrix} \begin{pmatrix} b_i \\ N \end{pmatrix} = \begin{pmatrix} Q_i \\ \lambda I \end{pmatrix} N$$

This equation relates phase change and pseudo-range directly to N (without inclusion of the baseline vector b). This equation can be used as the measurement equation of the Kalman filter of Step S231 without a corresponding dynamic transition model. Calculating the value of N directly (instead of attempting to calculate a Kalman-filtered baseline) allows baseline states to be removed from the filter; because N is constant, no dynamic transition model is needed. Removing the requirement for the dynamic transition model can substantially reduce the time and/or memory required to compute solutions; additionally, errors that might occur in a dynamic model cannot be explained away as errors in estimates of N. Computing Q_i requires knowledge of the line-of-sight vectors contained in DE_i. Step S231 preferably includes computing the line-of-sight vectors from an estimate of b (which, while not directly calculated in previous calculations, can be found using a set of phase measurements and an estimate for N). Estimates of b are preferably found as in Step S240, but may additionally or alternatively be found by any suitable method. Additionally or alternatively, Step S231 may include computing the line-of-sight vectors from reference station data, or in any other suitable manner. For a particular set of line-of-sight vectors, Q_i is preferably computed by generating a matrix whose rows form a basis for the left null space of DE, or Ker[DEᵀ]. This generation is preferably done via QR decomposition, but may additionally or alternatively be performed using singular value decomposition or any other suitable method. From these equations, a set of hypotheses can be generated.
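The preferred construction of Q_i via QR decomposition (a matrix whose rows form a basis for Ker[DEᵀ]) can be sketched as follows; this is an illustrative numpy sketch rather than the disclosed implementation.

import numpy as np

def left_nullspace(DE):
    """Return Q whose rows form a basis for Ker[DE^T], i.e. Q @ DE ~= 0.

    A complete QR decomposition of DE (m x 3 for m double differences)
    yields an orthonormal m x m factor; its columns beyond rank(DE) are
    orthogonal to the column space of DE and therefore annihilate it
    from the left.
    """
    Q_full, _ = np.linalg.qr(DE, mode="complete")
    r = np.linalg.matrix_rank(DE)
    return Q_full[:, r:].T

# Example: a random line-of-sight difference matrix for 6 double differences
np.random.seed(0)
DE = np.random.randn(6, 3)
Q = left_nullspace(DE)
assert np.allclose(Q @ DE, 0.0, atol=1e-10)  # Q annihilates DE as required

Singular value decomposition would work equally well here, as the text notes; QR is simply the cheaper of the two for this shape of problem.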
Measurements arising from ambiguity vector N are expected to be normally distributed and to have a mean given by the equation

$$\begin{pmatrix} Q_i \nabla\Delta\phi_i \\ \lambda\nabla\Delta\phi_i - \nabla\Delta\rho_i \end{pmatrix} = \begin{pmatrix} Q_i \\ \lambda I \end{pmatrix} N$$

The corresponding covariance is determined from the results of the Kalman filter of Step S231, reducing the set of likely hypotheses (as compared to covariance derived directly from the measurement model). From this information, a distribution of the set of hypotheses can be found. Ideally, all hypotheses within a particular confidence interval are tested. The number of hypotheses contained within this interval (hereafter referred to as the testing set) is dependent on the covariance for the N distribution. Since N needs to be computed for several satellites to determine the position of the mobile receiver, the total set of hypotheses that need to be tested depends both on the covariances for each satellite's associated N value and on the number of satellites. Hypotheses are preferably bounded (for a particular confidence interval) by an ellipsoid defined by the covariance matrix. The ellipsoid defined by the covariance matrix is often extremely elongated, resulting in a time- and computation-intensive hypothesis generation process. To reduce the time and computational resources required to perform this process, Step S231 may include performing a decorrelating reparameterization on the hypothesis search space, as shown in FIG. 9. Performing this reparameterization transforms the hypothesis space such that the elongated ellipsoid is transformed to an approximate spheroid; this transformation allows hypotheses to be identified substantially more easily. The hypotheses can then be transformed by an inverse transformation (the inverse of the original reparameterization) to be returned to the original coordinate space. Step S231 preferably includes generating hypotheses for the testing set according to memory limits on the mobile receiver. For example, if a receiver features 64 kB of memory for storing hypotheses and storing initial hypotheses for eight satellites requires 70 kB (while storing initial hypotheses for 7 satellites requires only 50 kB), Step S231 may include generating a set of initial hypotheses for 7 satellites, and then adding hypotheses for the eighth satellite after enough of the initial hypotheses for the 7 satellites have been eliminated. S231 may additionally or alternatively include waiting for the covariance of the Kalman filter's estimate to shrink before generating hypotheses if the current covariances are large enough that memory cannot store testing sets at some threshold confidence level for at least four satellites. Though Step S231 is preferably performed before Step S232 is performed for the first time, Step S231 may be performed again at any suitable time to modify the set of hypotheses tested. For example, as the probabilities for each set of hypotheses are refined by Step S232, hypotheses may be added to or subtracted from the testing set by Step S231. For example, if Step S231 produces a testing set A and later adds a testing set B containing new satellites, the new testing set may be generated by taking the Cartesian/outer product of A and B, where probabilities are initialized via

$$P(A, B) = \frac{P(A)}{|B|}$$

where the denominator is the number of hypotheses in set B and P(A) is the probability of A generated in Step S232. Step S232 may include initializing probabilities via P(A, B) = P(A) (as Step S232 preferably tracks relative probabilities as opposed to absolute probabilities).
If Step S231 includes dropping a satellite (e.g., if the satellite can no longer be tracked), this can be accounted for by marginalizing hypotheses via

$$P(A) = \sum_{B \in \mathcal{B}} P(A, B)$$

over all hypotheses still in the set. Step S232 preferably includes tracking probabilities in log space; for l_i = ln[p_i]:

$$\ln[p_1 + p_2] = \ln[e^{l_1} + e^{l_2}] = l_1 + \ln[1 + e^{l_2 - l_1}]$$

Step S232 preferably includes approximating the logarithm term of l_1 + ln[1 + e^{l_2−l_1}] via a Taylor series in probability or log-probability space; additionally or alternatively, the logarithm term may be estimated as zero, as the exponential term may be very small compared to 1. This approximation may result in reduced computation time and/or memory. Step S232 functions to test the hypotheses of the refined set generated by Step S231 in order to identify the hypothesis corresponding to the actual value of N. Step S231 preferably includes generating hypotheses using LAMBDA or MLAMBDA algorithms using the means and covariances generated by a Kalman filter, but may additionally or alternatively include generating hypotheses using any other mean or covariance estimate of N or according to any suitable algorithm. Step S232 preferably includes testing hypotheses using a variation of the following Bayesian update formula for a hypothesis h given an observation y:

$$\ln[P_i(h)] = l_i(h) = l_{i-1}(h) + \ln[P(y_i \mid h)] - \eta_i \quad \text{where} \quad \eta_i = \ln\left[\sum_{h \in \mathcal{H}} P(y_i \mid h) \, P_{i-1}(h)\right]$$

Variables to be used in this equation are preferably defined according to the following definitions:

$$r_i = \tilde{Q}_i \begin{pmatrix} \nabla\Delta\phi_i \\ \nabla\Delta\rho_i \end{pmatrix}; \quad \tilde{Q}_i = \begin{pmatrix} Q_i & 0 \\ \lambda I & -I \end{pmatrix}$$

where r_i is distributed with mean and covariance

$$\bar{r}_i^N = \begin{pmatrix} Q_i \\ \lambda I \end{pmatrix} N; \quad \Sigma_i = \tilde{Q}_i \, \mathrm{Cov}\left[\begin{pmatrix} \nabla\Delta\phi_i \\ \nabla\Delta\rho_i \end{pmatrix}\right] \tilde{Q}_i^T$$

For observations y_i = r_i and hypotheses h = N, the previous hypothesis update formula can be written as

$$l_i(N) = l_{i-1}(N) - \chi_i^2(N) + \ln[k_i] - \eta_i; \quad \chi_i^2(N) = (r_i - \bar{r}_i^N)^T \Sigma_i^{-1} (r_i - \bar{r}_i^N)$$

where k_i is the scaling factor in the normal distribution. Step S232 preferably includes running the hypothesis test above until the ratio between the probabilities of the best two hypotheses reaches a set threshold; additionally or alternatively, Step S232 may include stopping the hypothesis test based on any other suitable condition (e.g., time). Step S232 may additionally or alternatively include dropping hypotheses from further testing if their associated pseudo-likelihood, given by

$$l_i''(N) = l_{i-1}''(N) - \chi_i^2(N) - l_{\max,i}; \quad l_{\max,i} = \max_N\left[l_{i-1}''(N) - \chi_i^2(N)\right]$$

is less than some set threshold; the likelihood ratio test may be performed in single precision for speed and numerical stability. Additionally or alternatively, Step S232 may include using any other suitable metric for removing unlikely hypotheses; for example, removing hypotheses with a probability ratio (relative to the best hypothesis) below some threshold value. Step S232 preferably includes calculating Σ_i and r_i only once per observation step, as opposed to once per hypothesis N; additionally or alternatively, Step S232 may include calculating these at any suitable time. Step S240 includes calculating receiver position. Step S240 functions to calculate the position of the mobile receiver based on the value for N computed in Step S230. After N has been determined, the baseline vector b for the mobile receiver is determined from the value(s) for N and phase/pseudo-range measurements by Step S240; this gives the position of the mobile receiver relative to a reference station. If the location of the reference station is known, Step S240 may include calculating the absolute position of the mobile receiver (by applying b to the reference station coordinates). Step S240 may additionally include transmitting or storing receiver position data. For instance, Step S240 may include transmitting receiver position data from the receiver to an external computer over UHF radio, the internet, or any other suitable means. All steps of the method 200 are preferably performed on a mobile receiver, but additionally or alternatively, any step or set of steps may be performed on a remote platform (e.g., cloud computing servers if the mobile receiver has internet access).
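The log-space bookkeeping and χ² scoring above can be sketched compactly. This is an illustrative numpy sketch, not the disclosed implementation: the function name is an assumption, the standard Gaussian log-likelihood (−χ²/2, absorbing ln k into the normalization) is used, and η_i is computed with a log-sum-exp for numerical stability.

import numpy as np

def update_log_probs(log_probs, r, r_means, Sigma):
    """One Bayesian update over integer-ambiguity hypotheses in log space.

    log_probs: l_{i-1}(N), one entry per hypothesis
    r:         observed residual vector r_i
    r_means:   predicted mean residual per hypothesis (one row per hypothesis)
    Sigma:     residual covariance Sigma_i (shared by all hypotheses)
    """
    Sigma_inv = np.linalg.inv(Sigma)
    diffs = r - r_means
    # chi^2_i(N) = (r_i - rbar_i^N)^T Sigma_i^{-1} (r_i - rbar_i^N), per hypothesis
    chi2 = np.einsum("ij,jk,ik->i", diffs, Sigma_inv, diffs)
    unnorm = log_probs - 0.5 * chi2
    # eta_i via log-sum-exp so relative probabilities stay normalized in log space
    m = unnorm.max()
    eta = m + np.log(np.sum(np.exp(unnorm - m)))
    return unnorm - eta

# Usage sketch: three hypotheses scored against a 2-dimensional residual
lp = np.log(np.full(3, 1.0 / 3.0))
r = np.array([0.1, -0.2])
r_means = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lp = update_log_probs(lp, r, r_means, np.eye(2) * 0.25)

Note that Σ_i and its inverse are computed once per observation step and shared across all hypotheses, matching the per-observation (rather than per-hypothesis) calculation preference stated above.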
Specific Example of a System and Method for Generating Satellite Positioning Corrections A system for distributed dense network processing of satellite positioning data includes a global correction module, a plurality of local correction modules, and a correction generator. The system may additionally include one or more interpolators. For example, data from the global correction module may be used to initialize local correction modules, may be passed to the correction generator via local correction modules, may be passed directly to the correction generator, and/or may be utilized by the system in any other manner. The system functions to generate correction data to be used by a mobile GNSS (Global Navigation Satellite System) receiver or any other GNSS receiver for which position/velocity/timing data correction is desired. Such a receiver (henceforth referred to as a mobile receiver) may operate on any satellite navigation system; e.g., GPS, GLONASS, Galileo, and Beidou. The correction data is preferably used to improve GNSS solution accuracy, and may take the form of PPP corrections, RTK corrections, or any other type of corrections (discussed in the section on the correction generator). Flexibility in the form of corrections data is an inherent and distinguishing aspect of the system over traditional position correction systems. Rather than attempting to generate corrections solely from a small set of high-quality global reference stations (as in PPP) or by comparing data in mobile receiver/reference station pairs (as in RTK), the system collects data from reference stations (and/or other reference sources), and instead of (or in addition to) applying this data directly to generate corrections, the data is used to generate both a global correction model (in the global correction module) and a number of local correction models (in local correction modules). Outputs of these models are then passed to the correction generator, which can use said outputs to generate correction data in any form. Further, the correction generator may cache and/or (with use of the interpolator) spatially interpolate corrections data to provide high-quality corrections to mobile receivers regardless of correction capability (e.g., whether the receiver can process RTK/PPP corrections) and location of individual base stations. By operating in this manner, the system may provide a set of corrections that (while usable with PPP receivers) suffers from little of PPP's long convergence time issues, with solution complexity scaling directly with the number of reference stations N (unlike RTK, in which solution complexity scales at least with the number of possible pairs, i.e., N²; in fact, many current solutions scale with N³ or worse). Further, since corrections are preferably calculated using local correction models that may depend on any number of single reference stations (rather than specific reference station pairs), corrections are substantially more robust to loss of a base station.
Further, the flexible nature of the system enables some functions (such as spatial interpolation and caching) to be performed much more generally than would be possible with RTK; while the concept of a "virtual reference station" is known within RTK (also referred to as a "pseudo reference station"), virtual reference stations typically involve the interpolation of RTK corrections data in real time (and, as discussed before, error correction scales in complexity with N²). In contrast, interpolation in the system can be limited to specific aspects of global and/or local corrections models, providing more robustness to error and better insight as to error causes. Further, unlike RTK, which requires real-time corrections data, the model-based system may cache or otherwise retain model parameters even when data is limited (e.g., when a reference station suddenly becomes unavailable). The system is preferably implemented in software as part of a networked distributed computing system, but may additionally or alternatively be implemented in any manner. The global correction module functions to maintain one or more global correction models. Global correction models preferably accomplish two functions: correcting for global error (i.e., error in GNSS positioning that does not vary substantially in space) and error-checking/seeding local error estimates (where local error refers to error that does vary substantially in space or per GNSS receiver). Note that seeding here refers to providing a coarse estimate as a starting point for further refinement. The global correction module preferably takes as input raw data from reference stations and the mobile receiver (e.g., carrier phase data, pseudorange data, reference station location, etc.) but may additionally or alternatively take in processed data from reference stations and/or the mobile receiver (e.g., positioning code data) or data from any other source (e.g., PPP global corrections data sources on the internet, calibration data for particular satellites or receiver types from a manufacturer or other source, satellite orbit data, satellite clock data). Reference stations preferably have one or more satellite receivers and generate corrections based on those receivers. The number and quality of satellite receivers used by a reference station (or other factors, like antenna type/size/location) may determine the accuracy of reference station data. Reference stations (or other sources of reference station data; e.g., a reference source that creates correction data from multiple reference stations) may be ordered or grouped by reference station quality (e.g., accuracy of corrections) and/or locality (e.g., if corrections are desired for a particular mobile receiver, reference stations may be ordered or grouped by distance to that receiver). The global correction module preferably explicitly models the effects of global parameters on GNSS navigation.
These parameters preferably include satellite clock error, satellite orbit error, satellite hardware bias, satellite antenna phase windup, phase center offset (PCO), and phase center variation (PCV) (all of which are per satellite, but generally do not vary spatially), solid earth tides, solid earth pole tides, ocean tidal loading (which vary spatially and temporally, but in a predictable manner), as well as coarse global models of ionospheric and tropospheric effects (in this case, global models may not be accurate enough by themselves to model ionospheric and tropospheric effects, but they provide a starting point for later refinement). Additionally or alternatively, the global correction module may model the effects of any parameters on GNSS signals as received by a mobile receiver or a reference station. The global correction module preferably additionally maintains uncertainty estimates for at least some global parameters; additionally or alternatively, the global correction module may not maintain uncertainty estimates. Note that for receivers used in generating/updating the global model, the global correction module may additionally or alternatively model effects unique to those receivers; e.g., receiver clock error, receiver hardware bias, and receiver antenna phase windup/PCO/PCV (which are unique to a given receiver but not directly dependent on location). The plurality of local correction modules function to maintain local correction models. Local correction models preferably correct for spatially local variance of effects on GNSS signals as well as for effects that are specific to particular receivers/reference stations. Local correction modules preferably correspond to (and receive data from) a single reference station. In some embodiments, a local correction module exists for each reference source or station, such that each local correction module takes input from a unique reference source. Additionally or alternatively, local correction modules may correspond to and/or couple to reference stations in any manner; for example, a local correction module may be used to model a number of reference stations within a particular spatial region. Additionally or alternatively, the system may include one or more local correction modules corresponding to mobile receivers. A local correction module preferably takes as input raw data from corresponding reference stations/mobile receivers (e.g., carrier phase data, positioning code data, reference station location, pseudorange, navigation data, message data, etc.) but may additionally or alternatively take in processed data from reference stations and/or the mobile receiver (e.g., broadcast ephemerides and almanacs) or data from any other source. The local correction module preferably additionally takes data from the global correction module (e.g., to initialize a local correction model for a new reference station and/or to compensate for global components of local error). Additionally or alternatively, the local correction module may take data from any source (e.g., the local correction module may take in only reference data and not any output of the global correction module). The local correction module preferably explicitly models the effects of local parameters on GNSS navigation. 
These parameters preferably include tropospheric and ionospheric effects (which are not directly dependent on reference station but vary spatially/temporally), receiver clock error, receiver hardware bias, receiver antenna phase windup/PCO/PCV (which are unique to a given receiver/antenna but not directly dependent on location), carrier phase ambiguity, and other position error (which covers effects not otherwise explicitly modeled). Additionally or alternatively, the local correction module may model the effects of any parameters on GNSS signals as received by a mobile receiver or a reference station. Like the global correction module, the local correction module may additionally or alternatively maintain/track parameter uncertainty estimates. In particular, the local correction module preferably models tropospheric and ionospheric effects as a function of receiver position. Ionospheric effects may be difficult to model. It is difficult to differentiate the effects on GNSS signals of ionospheric effects from those of receiver hardware bias; however, ionospheric effects tend to vary more quickly in time than receiver hardware bias. Accordingly, local correction modules may attempt to separate ionospheric effects and effects of hardware bias based on the rate of change of the combination of these effects. Further, ionospheric effects vary significantly (and not in an easily predictable manner) based not only on position, but also on the path a GNSS signal takes through the ionosphere. Accordingly, a model of ionospheric effects may need to take each of these factors into account. In one implementation of an invention embodiment, local correction modules model ionospheric effects per GNSS source (e.g., per satellite) as a function of both position (e.g., pierce point, where the line of sight between a receiver and a satellite intersects the atmospheric layer) and pierce angle (e.g., as an analogue for the signal path), as shown in FIG. 10A. Ionospheric effect may also be modeled with respect to frequency. Further, the ionosphere is preferably modeled as one or more thin shells; however, the ionosphere may additionally or alternatively be modeled in any manner. Likewise, ionospheric effects may be modeled in any manner; as opposed to modeling ionospheric effects as a function of position and angle, ionospheric effects may be modeled based on the set of pierce positions for each shell of the ionospheric model, as shown in FIG. 10B. In contrast, tropospheric effects are not substantially variant in frequency (for most satellite frequencies); further, while tropospheric effects do depend on angle, they typically do so in a predictable manner. Accordingly, local correction models preferably model tropospheric effects solely based on position (e.g., pierce point) with a static correction for angle (roughly corresponding to 1/cos θ, where θ is the angle from vertical). Additionally or alternatively, tropospheric effects may be modeled in any manner. Models of the global correction module and local correction modules are preferably weakly coupled; that is, changes in either model propagate to the other, but in a damped manner (which allows for reaction to changing conditions without bad data or reference station loss causing correction accuracy to break down). Additionally or alternatively, the models may be coupled in any manner (or not at all).
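The static angular correction for tropospheric delay noted above (roughly 1/cos θ) can be sketched as follows. This is illustrative only; production mapping functions refine this behavior near the horizon, and the function name is an assumption.

import math

def tropo_obliquity(zenith_angle_rad):
    """Map a zenith tropospheric delay to a slant-path delay factor.

    Simple 1/cos(theta) mapping as described above, where theta is the
    angle from vertical.
    """
    return 1.0 / math.cos(zenith_angle_rad)

# A satellite 30 degrees from vertical sees roughly 15% more tropospheric delay
print(tropo_obliquity(math.radians(30)))  # ~1.155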
Models of the global correction module and local correction modules are preferably maintained/updated via a Kalman filter or Gaussian process, but may additionally or alternatively be maintained/updated in any manner. The global correction module and local correction modules may use any set(s) of reference sources. For example, the local correction modules may use a strict subset of reference sources used by the global correction module (e.g., the subset of reference sources within a range threshold of a mobile receiver), or the global correction module may use a strict subset of reference sources used by the local correction modules (e.g., the subset of local reference sources with highest accuracy). As a second example, the local correction modules and global correction module may use overlapping reference sources (but neither set a subset of the other). As a third example, the local correction modules and global correction module may use non-overlapping sets of reference sources (i.e., they do not share a reference source). Likewise, these reference sources may receive satellite information from any set(s) of satellites. The output of the global correction module and local correction modules may be referred to as "pre-corrections" and may be generated in any form usable by the correction generator to generate correction data. Pre-corrections generated by the global correction module may be referred to as "global pre-corrections", while pre-corrections generated by a local correction module may be referred to as "local pre-corrections". In a variation of an invention embodiment, the global correction module includes a differenced ambiguity fixer (DAF) 111 that calculates carrier phase ambiguity for some reference station pairs. This differenced ambiguity fixer may be used, for instance, to help initialize new reference stations in global and local models more rapidly. Alternatively, the DAF 111 may be independent of the global correction module. The interpolator functions to interpolate spatially variant effects of the system. In particular, the interpolator preferably functions to transform per-reference-station models of local tropospheric and ionospheric effects into a local (reference-station-independent, but position-dependent) model of local tropospheric and ionospheric effects. For example, the interpolator may transform a set of tropospheric effect models corresponding to individual reference locations (each having a known position) to a regularly spaced grid. Additionally or alternatively, the interpolator may function in any manner (e.g., by creating a continuous interpolated model of tropospheric/ionospheric effects rather than a discrete grid, or using some other arrangement of discrete points than a grid pattern, as shown in FIG. 11, etc.). Note that any interpolation technique may be used; for example, kriging may be used (this technique has the advantage of also predicting uncertainty at the interpolated points). In general, the local position-dependent model may be referred to as a "unified position-based model" (since it unifies the output of multiple models corresponding to individual reference sources).
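As one way to realize the interpolator's kriging option, Gaussian-process regression (closely related to kriging, and likewise able to predict uncertainty at the interpolated points) can map per-station estimates onto a grid. This sketch uses scikit-learn as a stand-in library choice; the station coordinates, delay values, and kernel parameters are illustrative assumptions, not data from the disclosure.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Per-station positions (x, y in km) and estimated zenith tropospheric delays (m)
stations = np.array([[0.0, 0.0], [50.0, 10.0], [20.0, 40.0], [60.0, 60.0]])
delays = np.array([2.31, 2.28, 2.35, 2.30])

# RBF kernel plays the role of a smooth spatial variogram; WhiteKernel
# absorbs per-station measurement noise
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=30.0)
                              + WhiteKernel(noise_level=1e-4))
gp.fit(stations, delays)

# Interpolate onto a regular grid; std quantifies uncertainty at each grid point
xs, ys = np.meshgrid(np.linspace(0, 60, 4), np.linspace(0, 60, 4))
grid = np.column_stack([xs.ravel(), ys.ravel()])
mean, std = gp.predict(grid, return_std=True)

The per-point standard deviation is what makes this family of interpolators attractive here: the unified position-based model can report not just a correction but how trustworthy that correction is at any queried location.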
The interpolator may additionally or alternatively function to separate ionospheric effects and effects of hardware bias; for example, local correction modules may output to the interpolator both ionospheric and hardware bias estimates (optionally including a term characterizing the correlation between these estimates), and from these estimates attempt to fit a unified (spatially variant) ionospheric model to the data (after which hardware bias estimates for each reference source may be refined). For example, each local correction module (LCM_i) may output an ionospheric correction I_i(x, y, z) and a hardware bias correction H_i. At the local correction module these corrections may be improperly separated; e.g., I_i = I_i^(ideal) + Δ_i, H_i = H_i^(ideal) − Δ_i; but because the ionospheric estimates should fit the same model, the interpolator can use measurements from multiple reference sources to refine estimates of both ionospheric correction and hardware bias correction. The correction generator functions to generate corrections to be used by the mobile receiver. The correction generator preferably generates corrections based on output of the global correction module and the local correction modules for a mobile receiver in a form usable by the mobile receiver. For example, if the mobile receiver can accept PPP corrections, the correction generator may send corrections in the form of PPP corrections (though, in contrast to true PPP corrections, the corrections generated by the correction generator may depend upon the receiver position estimate or another spatial term). Additionally or alternatively, the correction generator may send corrections in the form of RTK corrections (e.g., of a virtual reference station), or in any other form (e.g., some sort of local coefficients that are part of a local model). Note that local and global corrections may happen in any order (and may be synchronous or asynchronous). The correction generator may additionally or alternatively send or use correction data in any manner to correct position data (e.g., the correction generator may take as input position data and generate corrected position data rather than a positioning correction to be implemented by, say, a mobile receiver). The correction generator preferably caches model output and generates corrections using this cache. Accordingly, in the absence of some real-time data, cached data may be substituted (not possible in traditional RTK). Additionally or alternatively, new parameters may be estimated based on a predicted variation in time (e.g., predicted from cached values), or the correction generator may not rely on cached and/or predicted outputs. The correction generator may additionally or alternatively calculate estimated uncertainty in the generated corrections given uncertainty in the input parameters (traditional PPP/RTK solutions are not capable of doing this). A method for distributed dense network processing of satellite positioning data includes receiving data from a set of reference stations, updating a global GNSS correction model, updating a set of local GNSS correction models, and generating GNSS corrections. The method preferably functions in a substantially similar manner to the system. Receiving data from a set of reference stations functions to receive input data used to update global and local GNSS correction models, substantially similar to as described in the system.
Updating a global GNSS correction model functions to update a global correction model substantially similar to that of the global correction module; this model preferably accomplishes two functions: correcting for global error (i.e., error in GNSS positioning that does not vary substantially in space) and error-checking/seeding local error estimates. Updates are preferably performed as described in the system description. Updating a set of local GNSS correction models functions to update local correction models substantially similar to those of the local correction modules; these models preferably correct for spatially local variance of effects on GNSS signals as well as for effects that are specific to particular receivers/reference stations. Updates are preferably performed as described in the system description. Generating GNSS corrections functions to generate corrections from global and local correction models to be used by a mobile receiver. Generating GNSS corrections preferably includes generating corrections as described in the sections on the correction generator; additionally or alternatively, generating GNSS corrections may, as part of this generation process, include performing interpolation (as described in the section on the interpolator). Likewise, generating GNSS corrections may include caching model output and generating corrections from this cache. Accordingly, in the absence of some real-time data, cached data may be substituted (this is not possible in traditional RTK). The method is preferably implemented by the system but may additionally or alternatively be implemented by any system for distributed dense network processing of satellite position data. Specific Example of a System and Method for Reduced-Outlier Satellite Positioning The position estimate of S240 is preferably calculated by any number of prediction and update steps based on the observations received in S210. For example, S210 may include receiving observations at different times, and S240 may include generating a position estimate using all of those observations and a previous position estimate. Alternatively, S240 may include generating a position estimate from only a subset of the observations. S250 includes generating an outlier-reduced second receiver position estimate. S250 functions to detect the effect of erroneous observations (i.e., erroneous observations detectable as statistical outliers) in the first receiver position estimate and modify the position estimate to increase accuracy (generating the second receiver position estimate, characterized by higher performance). While techniques for removing or weighting measurement outliers exist in the prior art (as well as analysis of solution or measurement quality based on residuals), S250 includes specific techniques that may more efficiently mitigate the effect of outliers than existing techniques. For example, while techniques exist for mitigating for a single outlier at a time, the techniques of S250 may lend themselves to identifying and/or mitigating for multiple outliers in parallel. S250 preferably detects outlier observations using one of the three following techniques (scaled residual technique, variance threshold technique, and hybrid technique). After detecting outlier observations, S250 preferably includes generating the second position estimate in the same manner as in S240, but excluding any outlier observations.
Additionally or alternatively, S250 may include generating the second position estimate by adding new observations with negative variances as updates to the first position estimate (the new observations serving to remove the effects of detected outlier observations), or in any other manner. While these are two examples of how S250 may mitigate effects of outliers on position estimates, S250 may additionally or alternatively accomplish this in any manner (e.g., weighting non-outlier observations more strongly than outlier observations). Scaled Residual Technique In a first implementation of an invention embodiment, S250 includes generating an outlier-reduced second receiver position estimate using the scaled residual technique described in this section. Note that the term "scaled residual technique" is here coined to refer to exactly the technique described herein (any similarity in name to other techniques is purely coincidental). In the scaled residual technique, S250 preferably includes calculating posterior residual values for the satellite data observations. That is, for observations z_k and posterior state estimate x̂_{k|k} (calculated in S220), S250 preferably includes calculating the residual

$$\tilde{v}_{k|k} = z_k - H_k \hat{x}_{k|k}$$

henceforth referred to as the posterior observation residual (sometimes referred to as the measurement post-fit residual). From the posterior observation residual, S250 preferably includes calculating the posterior observation residual covariance,

$$C_k = R_k - H_k P_{k|k} H_k^T$$

where R_k is the covariance of n_k and P_{k|k} is the updated state covariance. From the posterior observation residual covariance, the variance of the posterior observation residual vector can be calculated:

$$\sigma^2 = \frac{v^T R_k^{-1} v}{DOF}$$

where DOF is degrees of freedom. Note that v may be written as Sz, where S is a matrix having a trace equivalent to the DOF. From this, it can be said that

$$S = I - H_k P_{k|k} H_k^T R_k^{-1}$$

Finally, this variance can be used to scale the residuals ṽ_{k|k} (e.g., by dividing residuals by their associated standard deviations or by their associated variances). The scaled residuals are then compared to a threshold window (e.g., one corresponding to plus or minus 3 standard deviations from the mean), and any observations falling outside the threshold window are flagged as outlier observations. The second receiver position state is then generated from the reduced set of observations as described previously. Variance Threshold Technique In a second implementation of an invention embodiment, S250 includes generating an outlier-reduced second receiver position estimate using the variance threshold technique described in this section. Note that the term "variance threshold technique" is here coined to refer to exactly the technique described herein (any similarity in name to other techniques is purely coincidental). In the variance threshold technique, the posterior residual, posterior residual covariance, and posterior residual variance are calculated as in the scaled residual technique. However, in this technique, the posterior residual variances are examined directly. If one or more posterior residual variances are outside of a threshold range, this is an indication that outliers may be present in the observation data. In this technique, S250 preferably includes removing a set of observations and recalculating the posterior residual variances.
If the posterior residual variances fall below threshold levels, the algorithm may stop here; however, the algorithm may alternatively try removing a different set of observations (and so on, until at least one or more of them falls below threshold levels). Alternatively stated, the algorithm may continue until the number of posterior residual variances outside of a threshold range is less than a threshold number. Alternatively, in this technique, S250 may include calculating posterior residual variances for a number of set-reduced observations (i.e., different subsets of the whole set of observations) and choosing the reduced set with the lowest variance. This technique may be particularly useful for differenced measurements. Differenced measurements are correlated, and thus more likely to result in an outlier in one observation corrupting residuals that correspond to different observations. The second receiver position state is then generated from the reduced set of observations as described previously. Hybrid Technique In a third implementation of an invention embodiment, S250 includes generating an outlier-reduced second receiver position estimate using the hybrid technique described in this section. Note that the term "hybrid technique" is here coined to refer to exactly the technique described herein (any similarity in name to other techniques is purely coincidental). In the hybrid technique, the posterior residual, posterior residual covariance, and posterior residual variance are calculated as in the scaled residual technique. Then, the posterior residual variances are examined. If one or more posterior residual variances are above a threshold (note: this can be a different threshold than the one mentioned in the variance threshold technique), S250 includes detecting outliers using the variance threshold technique; however, if not, S250 includes detecting outliers using the scaled residual technique. Additionally or alternatively, S250 may include selecting between the variance threshold and scaled residual techniques in any manner based on the number of above-threshold posterior residual variances and/or their magnitude. The second receiver position state is then generated from the reduced set of observations as described previously. All three of these techniques preferably treat phase ambiguity as a continuous variable; however, S250 may additionally or alternatively attempt to constrain phase ambiguity to an integer. For example, S250 may include (e.g., after calculating a second position estimate) calculating phase measurement residuals and comparing those residuals to integer multiples of full phase cycles (e.g., 2πn). If the residual is close, this may be indicative of a cycle slip, rather than an erroneous observation. In one implementation of an invention embodiment, S250 includes detecting a potential cycle slip, verifying that the value of the cycle slip can be chosen reliably (e.g., by verifying that only a single integer cycle slip value is contained within a known window of variance around the value of the residual), and testing the cycle slip value against the residual (e.g., by verifying that the cycle slip value is within a window of variance of the residual value). Note that the two windows of variance described here may be distinct (e.g., one may be smaller than the other). S250 may then include correcting for the cycle slip.
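Both the scaled residual screening and the cycle-slip screening described above can be sketched compactly. These are illustrative numpy sketches under simplifying assumptions (well-conditioned covariances and a positive-definite posterior residual covariance); the function names and window parameters are not from the disclosure.

import math
import numpy as np

def scaled_residual_outliers(z, H, x_post, P_post, R, n_sigma=3.0):
    """Flag observations whose scaled posterior residuals leave a +/- n_sigma window."""
    v = z - H @ x_post                          # posterior observation residual
    C = R - H @ P_post @ H.T                    # posterior residual covariance C_k
    S = np.eye(len(z)) - H @ P_post @ H.T @ np.linalg.inv(R)
    dof = np.trace(S)                           # degrees of freedom, per S above
    sigma2 = (v @ np.linalg.inv(R) @ v) / dof   # residual variance estimate
    scaled = v / np.sqrt(sigma2 * np.diag(C))   # divide by associated std devs
    return np.abs(scaled) > n_sigma             # True where an outlier is flagged

def check_cycle_slip(residual_cycles, select_halfwidth, test_halfwidth):
    """Return an integer cycle-slip candidate, or None if screening fails.

    Step 1: exactly one integer must lie within +/- select_halfwidth of the
    residual (the slip value can be chosen reliably).
    Step 2: that integer must lie within +/- test_halfwidth of the residual.
    """
    lo = residual_cycles - select_halfwidth
    hi = residual_cycles + select_halfwidth
    if math.floor(hi) - math.ceil(lo) + 1 != 1:
        return None                             # zero or multiple candidates
    slip = math.ceil(lo)                        # the unique integer in the window
    return slip if abs(residual_cycles - slip) <= test_halfwidth else None

# e.g. a residual of 2.97 cycles: unique candidate 3 within 0.4, confirmed within 0.1
print(check_cycle_slip(2.97, 0.4, 0.1))  # -> 3

The two half-widths correspond to the two (possibly distinct) windows of variance described above, with the test window typically the smaller of the two.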
Note that if the method 200 identifies data from one or more sources (e.g., satellites, base stations) as erroneous, the method 200 may include flagging or otherwise providing notification that said sources may be "unhealthy". Further, the method 200 may disregard or weight differently observations from these sources. The method 200 is preferably implemented by the system 100 but may additionally or alternatively be implemented by any system for processing of satellite position data. The methods of the preferred embodiment and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with a system for high-integrity satellite positioning. The computer-readable medium can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application-specific processor, but any suitable dedicated hardware or hardware/firmware combination device can alternatively or additionally execute the instructions. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
175,508
11860261
DETAILED DESCRIPTION The present disclosure will be described with reference to the accompanying drawings. The following Detailed Description refers to accompanying drawings to illustrate exemplary embodiments consistent with the disclosure. References in the Detailed Description to "one exemplary embodiment," "an exemplary embodiment," "an example embodiment," etc., indicate that the exemplary embodiment described may include a particular feature, structure, or characteristic, but every exemplary embodiment does not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same exemplary embodiment. Further, when the disclosure describes a particular feature, structure, or characteristic in connection with an exemplary embodiment, those skilled in the relevant arts will know how to effect such feature, structure, or characteristic in connection with other exemplary embodiments, whether or not explicitly described. The exemplary embodiments described herein provide illustrative examples and are not limiting. Other exemplary embodiments are possible, and modifications may be made to the exemplary embodiments within the spirit and scope of the disclosure. Therefore, the Detailed Description does not limit the disclosure. Rather, only the below claims and their equivalents define the scope of the disclosure. Hardware (e.g., circuits), firmware, software, or any combination thereof may be used to achieve the embodiments. Embodiments may also be implemented as instructions stored on a machine-readable medium and read and executed by one or more processors. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, in some embodiments, a machine-readable medium includes read-only memory (ROM); random-access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that the actions result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, and/or instructions. Any reference to the term "module" shall be understood to include at least one of software, firmware, and hardware (such as one or more circuit, microchip, or device, or any combination thereof), and any combination thereof. In addition, those skilled in the relevant arts will understand that each module may include one or more than one component within an actual device, and each component that forms a part of the described module may function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein may represent a single component within an actual device. Further, components within a module may be in a single device or distributed among multiple devices in a wired or wireless manner.
The following Detailed Description of the exemplary embodiments will fully reveal the general nature of the disclosure so that others can, by applying knowledge of those skilled in relevant arts, readily modify and/or customize for various applications such exemplary embodiments, without undue experimentation and without departing from the spirit and scope of the disclosure. Therefore, such modifications fall within the meaning and plurality of equivalents of the exemplary embodiments based upon the teaching and guidance presented herein. Here, the phraseology or terminology serves the purpose of description, not limitation, such that the terminology or phraseology of the present specification should be interpreted by those skilled in relevant arts in light of the teachings herein. FIG. 1 depicts an example environment in accordance with some embodiments. The example environment of FIG. 1 corresponds with an indoor positioning system or a real-time location system for indoor tracking. As shown in FIG. 1, three floors 120, 122, and 124 inside a building 100 are shown. On each floor, one or more communication devices are installed. By way of a non-limiting example, three communication devices 102, 104, and 106 are installed on the floor 120 such that the communication device 104 and/or the communication device 106 may receive a signal transmitted by the communication device 102. Similarly, the communication device 102 and/or the communication device 104 may receive a signal transmitted by the communication device 106, and the communication device 102 and/or the communication device 106 may receive a signal transmitted by the communication device 104. As shown in FIG. 1, three communication devices 108, 110, and 112 are installed on the floor 122, and three communication devices 114, 116, and 118 are installed on the floor 124. In some embodiments, the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118 may be base stations that may transmit their location information in an acoustic signal, an infrared signal, an ultra-wideband signal, and/or a radio frequency signal. One or more portable tags or user equipment (UE) devices, such as a smartphone, a mobile phone, a laptop, a tablet, a personal computer, etc., may receive the location information transmitted from any of the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118, and transmit data derived from the received location information along with tag-identifying information to a server via one or more access points. The server may then determine the location of a portable tag using information received from the portable tag via the one or more access points. By way of a non-limiting example, the one or more portable tags or UE devices may determine the location of the one or more portable tags or UE devices using signals received from one or more of the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118. In some embodiments, by way of a non-limiting example, the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118 may include an infrared transceiver to transmit and receive an infrared signal.
As described above, the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118 may transmit their location information as a broadcast signal at a precise time such that other communication devices, including base stations, portable tags, stationary tags, mobile phones, smartphones, laptops, etc., can deduce the precise time at which the communication device, for example, communication device 102, may broadcast a data signal that may include location information of the communication device. The precise time at which the communication device broadcasts the data signal containing its location information may be deduced based on when other units send their BLE or acoustic messages including sync-time, which is a time 0 within a second if the signals are sent every second. Each communication device in the communication system is synchronized for the sync-time of other communication devices in the communication system, as described in detail below, such that each communication device may have the same sync-time when measured against an external clock, e.g., a global positioning system (GPS) clock. However, in the embodiments, as described herein, each communication device of the communication system is synchronized without any additional hardware required for synchronization using the GPS clock. In some embodiments, each communication device in the communication system has an internal clock that may have a resolution better than 1 ms. In most cases, a real-time crystal is used with a frequency of 32768 Hz in combination with a counter. The internal clock reports the clock tick count n_RTC since startup within the confines of the physical hardware unit to any software service that might require it. Generally, there may be a delay between the registration of the clock count and the association of the registration with another event. In embedded hardware units, such delays are deterministic and can be measured to compensate for them. The internal clock may experience a drift with respect to universal time, characterized by a rate a_RTC. With respect to a reference time t (e.g., simulation or universal time), the clock count is typically represented as n_RTC = a_RTC × t + b_RTC, where b_RTC represents a constant offset. This linearized approach may be applicable over a finite timespan, typically of the order of minutes or hours, depending on environmental conditions and the hardware used. Further, n_RTC is an integer and, therefore, suffers from rounding issues and wrapping. Accordingly, synchronization of the internal clock may be achieved by messaging the other communication devices for the offset and drift of the internal clock such that the other communication devices may update their offset and drift so that all the communication devices in the communication system may have their internal clocks synchronized. In some embodiments, by way of a non-limiting example, the communication devices 102, 104, 106, 108, 110, 112, 114, 116, and 118 each may transmit a message using a radio frequency signal, such as Bluetooth Low Energy (BLE) advertising identifiers (IDs). The communication device may broadcast the BLE advertising IDs at a preconfigured time interval. The preconfigured time interval may be referred to in this disclosure as a sync interval. By way of a non-limiting example, the sync interval may be 100 milliseconds, 1 second, or 5 seconds, etc.
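The linearized clock model n_RTC = a_RTC × t + b_RTC described above can be estimated from timestamped tick observations with an ordinary least-squares fit. This is an illustrative sketch; the sample data and the choice of a polynomial fit are assumptions, not part of the disclosure.

import numpy as np

# Paired observations: reference time t (s) and observed RTC tick count n
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
n = np.array([12.0, 32781.0, 65549.0, 98320.0, 131089.0])

# Fit n = a*t + b; a is the effective tick rate (drift-inclusive), b the offset
a_rtc, b_rtc = np.polyfit(t, n, 1)
drift_ppm = (a_rtc / 32768.0 - 1.0) * 1e6  # deviation from the nominal 32768 Hz
print(f"rate: {a_rtc:.2f} ticks/s, offset: {b_rtc:.1f} ticks, drift: {drift_ppm:.1f} ppm")

Because the model is only valid over a finite timespan (minutes to hours, as noted above), such a fit would be recomputed periodically rather than trusted indefinitely.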
In some embodiments, the BLE advertising ID may include an identification of the communication device broadcasting the BLE advertising ID and timing information that indicates a sync-time of the communication device broadcasting the BLE advertising ID. By way of a non-limiting example, the BLE advertising ID may be according to the Eddystone format, an open beacon format developed by Google. The timing information in the BLE advertising ID, for example, may be a number between 0 and 32768, which corresponds to a number of ticks of a clock of 32768 Hz frequency since the sync-time of the communication device that is broadcasting the BLE advertising ID. By way of a non-limiting example, the timing information in the BLE advertising ID may be the current time, an offset, or drift with respect to the sync-time of the communication device broadcasting the BLE advertising ID. Accordingly, when other communication devices receive the BLE advertising ID broadcast from the communication device, the other communication devices may calculate the sync-time of the communication device with respect to their internal clocks, and adjust their sync-times. In some embodiments, the offset may be expressed as physical real-time clock (RTC) counts or as virtual clock counts. The virtual clock counts may be corrected for the relative drift of the clock of the communication device broadcasting the BLE advertising ID. By way of a non-limiting example, the communication devices102,104, and106are each within the communication zone of each other. Accordingly, the sync-time of the communication device102may be synchronized with the sync-times of the communication devices104and106, as described herein. Similarly, the sync-time of the communication device104may be synchronized with the sync-times of the communication devices102and106, and the sync-time of the communication device106may be synchronized with the sync-times of the communication devices102and104. This process of synchronizing the sync-time is described below with reference to the communication devices102and104. In some embodiments, for example, the communication device102may receive a BLE advertising ID broadcast from the communication device104. The BLE advertising ID from the communication device104may include the timing information as described above in addition to the identification of the communication device104. To determine the sync-time of the communication device104with respect to the internal clock and internal sync-time of the communication device102, the communication device102may determine a local time of the internal clock at which the communication device102received the BLE advertising ID. The communication device102may then determine a timestamp of the BLE advertising ID and the sync-time of the communication device104included in the BLE advertising ID. Based on the local time of the internal clock at which the communication device102received the BLE advertising ID from the communication device104, the timestamp of the BLE advertising ID from the communication device104, and the sync-time of the communication device104included in the BLE advertising ID, the communication device102may calculate how many ticks of its internal clock ago the sync-time of the communication device104occurred. The communication device102may then adjust its own sync-time based on the mapped sync-time of the communication device104with respect to its own internal clock.
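By way of a non-limiting illustration, the mapping just described might be sketched as follows; the field names and the single known-delay term are simplifying assumptions:

```python
# Illustrative sketch only: map a neighbor's sync-time onto the local clock.

def map_neighbor_sync(local_rx_ticks, ticks_since_sync, known_delay_ticks=0):
    """Local tick count at which the neighbor's sync-time occurred.

    local_rx_ticks    -- local clock reading when the advertising ID arrived
    ticks_since_sync  -- timing field of the BLE advertising ID (0..32768)
    known_delay_ticks -- measured deterministic path/processing delay
    """
    return local_rx_ticks - ticks_since_sync - known_delay_ticks


def adjusted_sync_time(neighbor_sync_local, offset_ticks=0):
    """New local sync-time: coincide with the neighbor's mapped sync-time,
    or sit a preconfigured number of ticks apart from it."""
    return neighbor_sync_local + offset_ticks
```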
By way of a non-limiting example, the communication device102may adjust its own sync-time to occur concurrently with the sync-time of the communication device104. In some cases, the communication device102may adjust its sync-time to occur at a preconfigured number of ticks apart from the sync-time of the communication device104. In some embodiments, the communication device102may listen for the BLE advertising ID from other communication devices, for example, the communication device104, before the communication device102may broadcast a BLE advertising ID. The communication device102may wait for a predetermined time interval, for example, up to 10 seconds, before broadcasting the BLE advertising ID including a sync-time of the communication device102and a device identifier of the communication device102. If the communication device102does not receive a BLE advertising ID within the predetermined time interval, the communication device102may broadcast the BLE advertising ID. In some cases, the communication device102may broadcast the BLE advertising ID after the communication device102adjusts its sync-time with reference to one or more communication devices, for example, the communication device104, etc. The standard BLE advertising ID packet generally has very little room for extra data to be transmitted. Moreover, sending more data over the air requires additional battery power from the communication device sending the data. Accordingly, to save the battery power of the communication device, the communication device may be configured to send the BLE advertising ID at the sync-time of the communication device. Accordingly, the timing information in the BLE advertising ID may not include the current time, the offset, the drift, and/or the number of ticks of the clock of 32768 Hz frequency since the sync-time of the communication device that is broadcasting the BLE advertising ID. Rather, the communication device may broadcast the BLE advertising ID at the sync-time of the communication device, and the communication devices thus receiving the BLE advertising ID may deduce the sync-time based on the timestamp at which the BLE advertising ID was broadcast. By way of a non-limiting example, the communication device may send subsequent BLE advertising IDs at the sync-time plus a configurable interval. The configurable interval may increase for each subsequent broadcast of the BLE advertising ID. For example, the configurable interval for one BLE advertising ID may be 10 ms, the configurable interval for the next BLE advertising ID may be 20 ms, the configurable interval for the next BLE advertising ID may be 30 ms, and so on, until the configurable interval reaches a value of 50 ms. Once the configurable interval reaches the value of 50 ms, the next BLE advertising ID may be sent with a configurable interval of 0, and the pattern then repeats as described above. Accordingly, the communication device may save battery power and may avoid packet collisions with other communication devices. In some embodiments, the communication device may determine the total number of neighbor communication devices. The communication device may then determine a time window to send the BLE advertising ID based on the total number of neighbor communication devices, as exemplified following the sketch below.
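By way of a non-limiting illustration, the staggered interval scheme described above might be expressed as a simple generator; the 10 ms step and 50 ms cap come from the example values in the text, while the generator form itself is an assumption:

```python
# Illustrative sketch only: per-broadcast offset from the sync-time,
# growing by 10 ms per advertising ID and wrapping after 50 ms.
from itertools import islice


def broadcast_offsets_ms(step=10, cap=50):
    """Yield offsets 10, 20, 30, 40, 50, 0, 10, ... (milliseconds)."""
    offset = step
    while True:
        yield offset
        offset = 0 if offset >= cap else offset + step


print(list(islice(broadcast_offsets_ms(), 7)))  # [10, 20, 30, 40, 50, 0, 10]
```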
For example, the communication device may select the time window to send the BLE advertising ID as 10 ms if the total number of neighbor communication devices is four, but may select the time window to send the BLE advertising ID as 100 ms if the total number of neighbor communication devices is 100. Within the selected time window, the communication device may transmit the BLE advertising ID comprising the sync offset. Since the clock of a communication device continuously drifts, the communication device102may listen again for the BLE advertising IDs from the other communication devices, for example, the communication device104and/or the communication device106, to adjust the sync-time of the communication device102. Thus, the communication device102may adjust its sync-time with reference to the sync-time of the communication device104and the sync-time of the communication device106. Accordingly, within a few seconds or minutes, for example, within 10 seconds or so, a consensus is reached between the communication devices102,104, and106regarding their sync-times such that the sync-times of the communication devices102,104, and106would be the same when measured against an external clock, such as the GPS clock. However, the GPS clock is not used for synchronization of the sync-times of the communication devices102,104, and106. In some embodiments, after an initial adjustment or synchronization of the sync-time of the communication device is achieved with reference to other communication devices in the network, the communication device may wait for a longer duration before waking up to listen for the BLE advertising IDs from the other communication devices. The duration before which the communication device may wake up to listen again for the BLE advertising IDs may vary based on the confidence of the communication device in its clock drift and adjustment. In some embodiments, the communication device may be configured to synchronize its sync-time with reference to some communication devices only. For example, the communication devices102,104, and106may each be within communication range of one another. The communication device102may also receive signals from the communication device108, but the communication device102may be configured not to synchronize its sync-time with the communication device108. In some embodiments, if the communication device determines that the sync-time of a particular communication device drifts outside of an allowed clock drift range, the communication device may put the specific communication device on a blacklist and may not synchronize its sync-time with the specific communication device on the blacklist. The process or algorithm of synchronizing the sync-time of the communication device102with reference to the sync-time of the communication device104is simple. However, in the communication system, there are many communication devices. Accordingly, various algorithms may be used to synchronize the sync-times of communication devices. Various algorithms that may be used for synchronizing the sync-times of the communication devices are described with reference to an example environment, as shown inFIG.2. InFIG.2, a floor or a hallway200is shown with communication devices202,204,206, and208, which are similar to the communication devices shown inFIG.1. The communication device202may be in a communication range of the communication device204. The communication device204may be in a communication range of the communication devices202and206.
The communication device206may be in a communication range of the communication devices204and208, and the communication device208may be in a communication range of the communication device206. In some embodiments, the communication device204, which is in the communication range of each of the communication device202and the communication device206, may receive the BLE advertising ID from the communication device202and the communication device206. The communication device204then adjusts its sync-time based on an average of the offset and drift of the communication device202and the communication device206. Accordingly, a virtual clock of the communication device204is based on an average offset and drift of all communication devices in the communication system; this algorithm may be referenced as an average algorithm. Even though the communication device204is not within the communication range of the communication device208, because the sync-time of the communication device206is affected based on the offset and drift of the communication devices208and204, in the average algorithm, the virtual clock of the communication device204is directly or indirectly affected by the offset and drift of all communication devices. Accordingly, the sync-time of each communication device of the communication system will converge towards the same virtual clock. In some embodiments, the sync-time may be adjusted using an algorithm similar to the average algorithm. However, the sync-times that are farther from the sync-time of the communication device being adjusted are given more weight than the sync-times that are closer to it. In comparison with the average algorithm, in this weighted average algorithm, the sync-time of each communication device of the communication system will converge towards the same virtual clock faster. In some embodiments, for example, if the sync-time of the communication device202is less than 500 ms after the sync-time of the communication device204and the sync-time of the communication device206is between 500 and 1000 ms after the sync-time of the communication device202, then the communication device202may be considered the slowest unit when the sync-time interval is 1 second (1000 ms). The communication device204may then adjust its sync-time based on the sync-time of the communication device202rather than the communication device206. In some cases, the communication device204may adjust its sync-time based on the sync-time of the communication device206, which is the fastest unit as explained above, rather than the communication device202. As described above, the communication device204may repeat adjustment of its sync-time at a preconfigured time interval, which may be up to 60 seconds. This algorithm may be referred to in this disclosure as the slowest clock algorithm since the communication device adjusts its sync-time with reference to the slowest clock.
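By way of a non-limiting illustration only, the average and slowest-clock adjustments described above might be sketched as follows; representing each neighbor by an (offset, drift) pair, and reading "slowest" as the largest lag within one sync interval, are assumptions rather than details of the disclosure:

```python
# Illustrative sketch only: two of the sync-time adjustment algorithms.

def average_adjust(neighbors):
    """Average algorithm: adopt the mean (offset, drift) of all heard neighbors.

    neighbors -- list of (offset_ticks, drift) pairs from received advertising IDs
    """
    n = len(neighbors)
    return (sum(o for o, _ in neighbors) / n,
            sum(d for _, d in neighbors) / n)


def slowest_clock_adjust(neighbor_offsets_ms, sync_interval_ms=1000):
    """Slowest-clock algorithm (one plausible reading of the example above):
    follow the neighbor whose sync-time lags the most within one sync interval."""
    return max(neighbor_offsets_ms, key=lambda o: o % sync_interval_ms)
```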
In some embodiments, the communication device may randomly select the sync-time of another communication device to adjust its sync-time. By way of a non-limiting example, assuming the communication devices202,204,206, and208are each within a communication range of each other, the communication device204may randomly choose the sync-time of the communication device202to adjust its sync-time. Since both the communication devices202and204now broadcast the same sync-time, the communication device206may also select the sync-time broadcast by the communication device202or204to adjust its sync-time. The communication device208may also then adjust its sync-time based on the sync-time of the communication device202,204, or206. Accordingly, all communication devices may end up having the same sync-time. By way of a non-limiting example, the communication device may select a different communication device to adjust its sync-time each time the communication device listens to other communication devices at a predetermined time interval. This algorithm may be referenced in this disclosure as a random algorithm. The random algorithm is based on the assumption that, in a communication system of many communication devices broadcasting the same sync-time offset, there is a high probability that other communication devices in the communication system may use the same sync-time offset to adjust their sync-time. In some embodiments, the various algorithms described in this disclosure may all be run by the communication device when the communication device listens for other devices to adjust its sync-time. In some embodiments, the communication device may use a Kalman filter to estimate offset and drift based on the sync-time offset and drift information of the other communication devices received over time. The communication device may then adjust its sync-time based on the estimated offset and drift values. In some embodiments, the communication device may use artificial intelligence, including a neural network, to train a machine-learning algorithm to reduce the sync-time difference between various communication devices in the communication system. By way of a non-limiting example, the neural network may be a long short-term memory (LSTM) network that may be used for classifying, processing, and predicting based on historical data. In some embodiments, the communication device may apply more than one algorithm to adjust the sync-time. The algorithms, for example, the average algorithm, the weighted average algorithm, the slowest clock algorithm, the random algorithm, the Kalman filter, the machine-learning algorithm, and/or other algorithms may be applied in any order and in any combination thereof. In some embodiments, the communication device may not broadcast its sync-time offset and drift at all, but may instead update its sync-time based on the sync-time offset of the other communication devices. Accordingly, the communication device may save its battery power. In some embodiments, the communication device may broadcast its sync-time offset based on the status of the remaining battery power. The communication device may adapt the broadcasting of its sync-time according to the remaining battery power. In other words, the communication device may broadcast its sync-time at an increasing time interval based on the remaining battery power. In some embodiments, the communication device may refrain from waking up to listen for the sync-time update from the other communication devices for a longer period after the communication device determines that its sync-time has reached a sufficient accuracy. The communication device may use the standard deviation of the sync-time information received from the neighbor communication devices to determine whether its sync-time has reached a sufficient accuracy.
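By way of a non-limiting illustration of the Kalman-filter option mentioned above, the following sketch tracks a two-element state (offset, drift) and updates it from each observed offset; the noise parameters and the scalar-update formulation are assumptions, not details of the disclosure:

```python
# Illustrative sketch only: Kalman filter over state x = [offset, drift],
# observing the offset from each received advertising ID.

class OffsetDriftKF:
    def __init__(self, q=1e-6, r=1.0):
        self.x = [0.0, 0.0]                 # state: [offset, drift]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def step(self, measured_offset, dt):
        # Predict: offset advances by drift * dt (F = [[1, dt], [0, 1]]).
        o = self.x[0] + self.x[1] * dt
        d = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with the measured offset (H = [1, 0]).
        s = p00 + self.r
        k0, k1 = p00 / s, p10 / s
        y = measured_offset - o
        self.x = [o + k0 * y, d + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```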
The communication device may also choose to listen for a shorter period of time or to listen for fewer messages, for example, only one message, compared to when the communication device first powered on. The communication device may then listen for a longer period of time or for more messages when the communication device determines that its sync-time offset compared to the sync-times of other communication devices is outside a permissible configurable threshold range, for example, 50 to 100 microseconds. In some embodiments, the communication device may determine a time window for listening for messages from neighbor communication devices based on a number of messages received from each of the neighbor communication devices and/or the standard deviation or standard error of the received sync-time information from the neighbor communication devices. In some embodiments, if the standard deviation or standard error of the received sync-time information from a particular neighbor communication device is below a predetermined threshold value, e.g., a few clock ticks such as 10 clock ticks, the communication device may start estimation of the drift parameter and may increase the time interval between listening instances. In some embodiments, the standard error or standard deviation of the sync-time information from the neighbor communication devices may be grouped based on various ranges of the standard error or standard deviation. Accordingly, outlier neighbor communication devices may be identified, and sensitivity to such outlier neighbor communication devices may be controlled. In some embodiments, the communication device may broadcast its sync-time within a time window of, for example, 200 ms. Then, during listening, if the communication device receives messages from very few other communication devices, for example, fewer than five communication devices, the communication device may reduce its time window to, for example, 10 ms. The communication device then monitors whether the other communication devices also reduce their transmit time windows. If the communication device determines that the other communication devices have reduced their transmit time windows, the communication device then reduces its listening period. However, during listening, if the communication device receives messages from more communication devices, for example, five or more communication devices, then the communication device may reduce its transmit and receive time windows, for example, to 30 ms. When the communication device detects one or more new communication devices during listening, the communication device may increase its transmit and receive time windows by 1 second. In other words, the communication device may adjust its transmit and receive time windows dynamically based on the number of other neighbor communication devices from which the communication device receives sync-time information. FIG.3depicts an example block diagram of a communication device in accordance with some embodiments. A communication device300may include a processor302, a memory304, a clock306, a radio frequency (RF) transceiver308coupled with an antenna312, and an acoustic transceiver310coupled with an acoustic transducer314. Even though only one processor is shown inFIG.3, the communication device may include more than one processor. The processor302may be communicatively coupled with the memory304. The memory304may be a random access memory (RAM), a hard-disk, etc.
The memory304may store instructions that may be executed by the processor302to perform operations according to various embodiments as described herein. The clock306may have a frequency of 32768 Hz. The clock306may be battery powered and may have a very low power consumption. A resolution of the clock306may be about 30 microseconds (μs). The low resolution of 30 μs of the clock306may impact the accuracy of the sync-time of the communication device300. In some embodiments, to improve the accuracy of the sync-time of the communication device300, another high-frequency clock may be used to interpolate between ticks of the clock306when listening for messages from other communication devices or transmitting to other communication devices. Since the high-frequency clock normally runs only when the communication device is listening for messages from other communication devices or transmitting to other communication devices, there may not be an adverse impact on the battery life of the communication device. The RF transceiver308coupled with the antenna312may be a radio frequency transceiver according to any of the Wi-Fi protocols, for example, the Institute of Electrical and Electronics Engineers (IEEE) 802.11a/b/g/n/ac/ax. The RF transceiver308may be according to IEEE 802.15.4, Bluetooth Low Energy (BLE), ZigBee, 3G, 4G, 5G, 6G, etc. The RF transceiver308may be according to any radio frequency standard/protocol for a low-rate wireless transmission. The acoustic transceiver310, coupled with the acoustic transducer314, may transmit data signals containing location information of the communication device300. FIG.4depicts an example flow-chart of method steps in accordance with some embodiments. The method steps described in a flow-chart400may be performed by a first communication device of a plurality of communication devices. At step402of the flow-chart400, the first communication device may receive at least one message from a second communication device of the plurality of communication devices over a preconfigured time duration. The at least one message from the second communication device may include first timing information that indicates a sync-time of the second communication device. The at least one message from the second communication device may also include an identifier of the second communication device. As described above, the at least one message may be received via the antenna312and the RF transceiver308. In accordance with some embodiments, the first timing information may include at least one of a number of ticks since the sync-time of the second communication device, a time of transmission of the at least one message from the second communication device, and an offset since the sync-time of the second communication device. As described above, the second communication device may be configured to transmit the at least one message at its sync-time. Accordingly, the sync-time of the second communication device may be deduced by the first communication device based on the transmission timestamp of the at least one message from the second communication device. In accordance with some embodiments, the at least one message may be received over the preconfigured time duration, which may be up to 60 seconds. By way of a non-limiting example, the preconfigured time duration may be set to 3 seconds to receive one to three messages. In accordance with some embodiments, the at least one message may be compressed, and an average interval between the messages may be about 2-20 ms.
In accordance with some embodiments, at step404, the first communication device may determine a first local time of a clock of the first communication device at which the at least one message from the second communication device is received. In accordance with some embodiments, at step406, the first communication device may determine the sync-time of the second communication device based on the first timing information. As described above, by way of a non-limiting example, the first timing information may be a timestamp at which the second communication device transmitted the at least one message. The second communication device may be configured to transmit the at least one message at the sync-time of the second communication device. Accordingly, based on the timestamp of the at least one message, the sync-time of the second communication device may be determined. By way of a non-limiting example, the second communication device may include the offset, the drift, and/or the number of ticks of the clock of 32768 Hz frequency since the sync-time of the second communication device in the timing information. Accordingly, the first communication device may determine the sync-time of the second communication device. In accordance with some embodiments, at step408, the first communication device may map the sync-time of the second communication device to a second local time of the clock of the first communication device based on the first local time and the sync-time of the second communication device as determined at step406. The first communication device may calculate the second local time based on the clock of the first communication device by subtracting a known delay associated with the second communication device. The known delay may include a delay at the second communication device in transmitting the at least one message since the sync-time of the second communication device and other communication delays, such as path delay, processing delay, etc. In accordance with some embodiments, at step410, the first communication device may update its sync-time based on the second local time. Accordingly, the first communication device may synchronize its sync-time with respect to the sync-time of the second communication device. In a communication system where the first communication device may also receive at least one message from another communication device, for example, a third communication device, the first communication device may synchronize its sync-time using one or more algorithms described in this disclosure. In some embodiments, the first communication device may store historical data in the memory304while performing various steps of the sync-time adjustment process in accordance with various embodiments as described herein. In some embodiments, the communication device may be a portable tag or a user equipment (UE) such as a smartphone, a mobile phone, a tablet, etc. Finally, various embodiments described in this disclosure may bring long-term stability to a complex real-time network. Various embodiments described in this disclosure may also help the communication devices to save their battery power.
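By way of a non-limiting illustration, steps 402-410 of the flow-chart might be tied together as in the following sketch; the clock stub, message fields, and single known-delay term are assumptions rather than details of the disclosure:

```python
# Illustrative sketch only: steps 402-410 of flow-chart 400.

class LocalClock:
    """Minimal stand-in for the first communication device's internal clock."""
    def __init__(self):
        self.ticks = 0        # advanced elsewhere by the 32768 Hz counter
        self.sync_time = 0    # local tick count of the current sync-time

    def now(self):
        return self.ticks


def on_message(clock, message, known_delay_ticks=0):
    """Handle one message received from the second communication device (step 402)."""
    rx_local = clock.now()                     # step 404: first local time
    ticks_since_sync = message["timing_info"]  # step 406: sender's sync-time info
    mapped = rx_local - ticks_since_sync - known_delay_ticks  # step 408: map
    clock.sync_time = mapped                   # step 410: update own sync-time
    return mapped
```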
37,007
11860262
DETAILED DESCRIPTION OF THE INVENTION The present invention, in some embodiments thereof, relates to RF radiation methods, devices and systems, such as microwave or millimeter-wave methods, for imaging and/or modeling an object. More specifically, but not exclusively, the present invention relates to a system, device and methods for providing a 3D mechanical model of objects such as 3D objects. Additionally, embodiments of the present invention provide 3D data, such as 3D image data or modeling of the internal structure of an object, for example data of the inner sections or elements of the object, such as data relating to nonmetallic object materials, e.g., plastics, glass, wood, etc. In other words, while 3D modeling image data according to the prior art includes only the contour of the object, eliminating data relating to hidden parts of the object, the present invention methods and devices are configured to provide data on hidden parts of the object, such as the internal structure of opaque objects. In some cases, devices and methods according to the invention are configured to reproduce or model the surface and inner parts of an object wherein some of the object's parts are concealed and therefore may not be measured by prior art devices or methods without damaging the copied object. According to some embodiments of the present invention, there is provided a system for constructing a 3D representation or image (e.g. visualization) of an object, for example a virtual 3D representation (e.g. 3D image) of an object. The system comprises a plurality of transducers (e.g. electromagnetic transducers); a transmitter unit for applying RF (radio-frequency) signals to said electromagnetic transducer array; a receiver unit for receiving a plurality of RF signals affected by said object from said electromagnetic transducer array; a Radio Frequency Signals Measurement Unit (RFSMU) configured to receive and measure said plurality of affected RF signals and provide RF data of the object based on the plurality of affected RF signals; and at least one processing unit configured to: process said RF data to identify the dielectric properties of said object and construct a 3D visualization (e.g. image) of said object. Specifically, the system comprises a sensing unit, the sensing unit comprising an antenna array with a plurality of antennas configured to radiate high-frequency RF signals. The system further includes a transmitter module (sub-system), a receiver module (sub-system) and at least one processing unit for processing the measured signals and constructing the 3D image of the object. The 3D image may comprise a 3D representation of the surface and internal sections of the object. By augmenting the bistatic imaging with near-field 3D imaging and modeling of the external and internal structure of multi-layered complex bodies, the present invention embodiments may provide even more accurate models which can be used for various applications, including, but not limited to, 3D printing and non-destructive testing (NDT). According to another embodiment of the invention, the imaging system may include a number of electromagnetic mirrors for diversifying the viewing angles and imaging the object from multiple viewing angles. For example, scanning an object from a front hemisphere while having a mirror behind the object allows imaging the back side of the object as well.
Reference is now made toFIG.1, which is a schematic diagram illustrating a system100for constructing a 3D representation of an object110according to one embodiment of the invention. The system100comprises one or more sensors; specifically, the system comprises an array of transducers, for example one or more 3D multi-antenna array configurations120, which surrounds an object110(hereinafter OUT or MUT or sample or material(s) or substance(s)). For example, the object may be hermetically surrounded by the array. The antenna array120comprises a plurality of antennas125. The antennas can be of many types known in the art, such as flat spiral antennas, printed log-periodic antennas, sinuous antennas, patch antennas, multilayer antennas, waveguide antennas, dipole antennas, slot antennas, and Vivaldi broadband antennas. The antenna array can be a MIMO array, or linear or two-dimensional, flat or conformal to the region of interest. In some cases, the system100comprises a housing for holding the antenna array. For example, the housing may be a cage130, shaped as a spherical cage or another shape such as a cube, for holding the antenna array and surrounding the object. The cage130comprises one or more arcs135for holding the antennas125. For example, the object may be hermetically surrounded by the housing. In some cases, the housing may include an opening for inserting the object into the cage and a holder for holding the object. The antenna array120may transmit a plurality of RF signals137propagating into the cage130for constructing a 3D image of the object. The system100further includes a transmit/receive subsystem115configured to generate and transmit the RF signals, for example from 10 MHz to 10 GHz, a Radio Frequency Signals Measurement Unit (RFSMU)120such as a Vector Network Analyzer (VNA) for measuring the received/reflected signals, a data acquisition subsystem150, and further at least one processing unit160for processing the measured signals to provide an RF image and further a 3D visualization (e.g. one or more 3D images) of said object. The transmit/receive subsystem115is responsible for generation of the RF signals, coupling them to the antennas, reception of the RF signals from the antennas and converting them into a form suitable for acquisition. The signals can be pulse signals, stepped-frequency signals, chirp signals and the like. The generation circuitry can involve oscillators, synthesizers, mixers, or it can be based on pulse-oriented circuits such as logic gates or step-recovery diodes. The conversion process can include down conversion, sampling, and the like. The conversion process typically includes averaging in the form of low-pass filtering, to improve the signal-to-noise ratios and to allow for lower sampling rates. According to some embodiments of the invention, the transmit/receive subsystem115may perform transmission and reception with multiple antennas at a time or select one transmit and one receive antenna at a time, according to a tradeoff between complexity and acquisition time. The data acquisition subsystem150collects and digitizes the signals from the transmit/receive subsystem115while tagging the signals according to the antenna combination used and the time at which the signals were collected. The data acquisition subsystem150will typically include analog-to-digital (A/D) converters and data buffers, but it may include additional functions such as signal averaging, correlation of waveforms with templates or converting signals between frequency and time domain.
In an embodiment, the data acquisition subsystem150may include signal source(s), amplifiers, mixers, antennas, analog-to-digital converters, data transfer HW, memory, a controller, power delivery hardware, and all other required components. The processing unit160is responsible for converting the collected RF signals into responses, merging other data such as image data received from optical sensors such as the camera or the ultrasound units, and converting the sets of RF responses and image data into data to reconstruct a 3D image, as will be described in detail hereinbelow. The processing unit160is usually implemented as a high-performance computing platform, based either on dedicated Digital Signal Processing (DSP) units, general purpose CPUs, or, according to newer trends, Graphical Processing Units (GPU). A final step in the process is making use of the resulting image, either in the form of visualization, display, storage, archiving, or input to feature detection algorithms. This step is exemplified inFIG.1as console165. The console may be part of a mobile device and is typically implemented as a handheld computer such as a mobile telephone or a tablet computer with appropriate application software. In some cases, the system100may include an optical device such as an optical camera161configured to image and model the contour of the object110(e.g. provide an optical image), and characterize the visual characteristics of the object, e.g., its colors. The contour obtained from the optical device (e.g. the camera161) may be fused or superposed with the contour of the RF image as provided by the antenna array to get a more precise model of the object. The camera may be a CCD or CMOS camera. For example, the constructed 3D image processed by the processing unit based on measurements and analysis of the RF signals reflected from the object may be merged with an external 3D contour obtained from the 2D or 3D images. In some cases, the superposition process may be utilized as part of a calibration process of the array. The visual information about the exterior of the 3D object may be used to refine the accuracy of the microwave imaging system. Examples of embodiments of a calibration process may be found in U.S. patent application Ser. No. 14/499,505, filed on Sep. 30, 2015, entitled “DEVICE AND METHOD FOR CALIBRATING ANTENNA ARRAY SYSTEMS”, which application is incorporated by reference herein in its entirety. In some cases, system100may include ultrasound transducers170, and the processing unit may fuse the resulting reconstructed ultrasound image with the RF image and/or the optical image. The merging process of the different types of images (e.g. the RF image and other types of images such as the ultrasound image) may include an optimization step to improve and/or optimize 3D image reconstruction time. The optimization may include variable resolution imaging, e.g., reconstructing a crude image, which is then further processed to enhance resolution only in the relevant regions. The basic principle of operation of the system100is as follows. RF signals are radiated by the transmit/receive subsystem115, via the antenna array120, and are further reflected by the object110or transmitted through it, and received by the transmit/receive subsystem115. At the next step, the received signals are measured at the RFSMU120. The received signals are then processed at the processing unit160, resulting in a reconstructed 3D image.
Once a reconstructed image is available, further processing is carried out in order to distill a 3D mechanical model of the contour as well as the internal structure of the multi-layered object, as will be described in detail hereinbelow. Reference is now made toFIG.2, illustrating a sphere-shaped transducer (e.g. antenna) array200covering a surface of a housing230, shaped as a ball, and surrounding an object210according to one embodiment of the invention. The housing shape may include, but is not limited to, a ball, a cube, a cage and so on. The housing may be in the form of a spherical cage comprising a plurality of arcs220. The transducer (e.g. antenna) array200comprises a plurality of transducers (e.g. antennas)250(e.g. transmit and receive antennas) covering the housing's surface. In some cases, the plurality of antennas are attached along the housing's arcs. The antenna array200is configured to cover all bi-static angles of transmit and receive from multiple possible viewing angles of the object. It is stressed that covering multiple bi-static angles is crucial since many objects behave as “mirrors” for the relevant wavelengths, and therefore the energy is reflected to a localized angle in space. The antenna array topology as shown inFIG.2is configured to collect all Tx and Rx angles instantaneously. According to embodiments of the invention, the antenna array is configured to receive signals reflected from the object as well as signals transmitted through the object. For example, the signals may be received by an Rx antenna, transferred through an analog path (e.g. amplifier, mixer, filter), sampled by the analog-to-digital converter, digitally processed (e.g. filtering, weighting) by the processing unit and transferred to the next level. Reference is now made toFIG.3, illustrating a system such as an imaging system300according to another embodiment of the invention. The system300comprises an antenna array320comprising a plurality of antennas (e.g. transmit and receive antennas) which may be mounted on a housing330, shaped for example as a spherical ball or cage. The housing may include one or more arcs for holding the plurality of antennas and a driver and rotation unit380for rotating the housing and/or the arcs and/or the object along a rotation axis Y parallel to the arcs with respect to an X-, Y-, Z-axis Cartesian coordinate system. Specifically, the housing330may include two antenna arcs (e.g. antenna arc322and antenna arc324) comprising a plurality of antennas. For example, the two arcs may assume any angle of rotation with respect to axis Y while the object is static. In some cases, the arcs322and324may be at a distance α1 with respect to axis X along the housing330diameter. Advantageously, the rotation configuration as illustrated inFIG.3makes it possible to reduce the required number of antennas in the antenna array by rotating the antenna array (e.g. arcs322and324) or the object. The rotation unit380may be controlled by a controller which may determine the rotation speed of the antennas and also of the object. For example, the housing may be rotated at a speed of several degrees or tens of degrees per second. In accordance with embodiments as illustrated inFIG.3, system300is configured to measure the object310from all bi-static angles while utilizing a small number of antennas (e.g. receiving and transmitting antennas), for example fewer transmit and receive antennas than the number of antennas included in system200shown inFIG.2(for example fewer than10antennas or only two antennas).
However, the measurements provided by the system300are not instantaneous, and the antennas ofFIG.3must be swept (e.g. rotated) over all positions and angles. By rotating the arcs (e.g. arc324or322), the system may cover multiple bi-static angles, resulting in all possible angles between the transmitter and receiver, and all transmitter-to-object angles. In one embodiment, the system300may include two rotation states for imaging or modelling the object310, in accordance with embodiments of the invention. In a first rotation state, the object310is rotated while the housing330and the antennas are in a fixed position, imaging the object. Alternatively, the object may be in a fixed position while the housing330may be rotated, for example anticlockwise with respect to axis Y, imaging the object from all possible angles. Optionally, both the object and the housing may be rotated synchronously for imaging the object from all possible angles. In a second rotation state, the angle α1 between the arcs (e.g. arcs322and324) may be controlled; for example, the first arc, such as arc322, may be static, while the second arc (e.g.324) may be rotated towards or away from the first arc with respect to the Y axis. In some cases, the housing may be rotated with respect to the Y axis and/or the X axis. According to some embodiments of the invention, the system300may include more than two arcs; for example, the system300may include a plurality of arcs, for example at an angular spacing α1 between two consecutive arcs, where α1 may be 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or more degrees. In some cases, the system may include partial arcs; for example, the system may include a single arc covering 180 degrees (e.g. half of a spherical ball) or two arcs where each arc is configured to cover a complementary 90 degrees (both arcs together covering the 180 degrees). Reference is now made toFIG.4, illustrating a system400for imaging and further modelling an object410, in accordance with embodiments of the invention. The system400may comprise a limited number of antennas, for example only two antennas configured to cover all bi-static angles of the object. The system400comprises a housing430, shaped as a spherical cage, and surrounding the object410. According to embodiments of the invention, the antennas may be attached to the housing. For example, the housing may be in the form of a spherical cage comprising a plurality of arcs220, where each arc is configured to hold at least one antenna. In some cases, as shown inFIG.4, the imaging system400comprises a first antenna432and a second antenna434, where each antenna may be located at any point on the surface of the sphere. For example, the first antenna432may be attached to a first arc422while the second antenna434may be attached to a second arc424. In an embodiment, both antennas, the first antenna and the second antenna, may slide up and down accordingly and synchronously while the sphere is rotated. Reference is now made toFIG.5A, illustrating a flowchart500of a method of 3D image reconstruction and modelling of an object using an RF device, in accordance with embodiments of the invention. At step510, one or more RF signals, for example between 2 GHz and 9 GHz, are generated by one or more RF transducers (e.g. sensors or antennas), such as the antenna array attached to or placed on the housing comprising the object as illustrated inFIGS.1-4. The signals are emitted towards the object, preferably from all (or almost all) bi-static angles. The RF sensors may be any of the above-mentioned sensors.
At step520, the multiple RF signals reflected or affected from or by the object and/or the scene surrounding the object are obtained from all (or almost all) bi-static angles by the RF sensors, and at step530, the reflected or affected RF signals are measured, for example by the RFSMU120, to obtain RF data of the object. In an embodiment, the emitted and affected signals of steps510and520are transmitted and obtained from a plurality of angles by rotating the object and/or the RF transducers as illustrated inFIGS.1-4. At step540, a calibration process is carried out to tune the imaging system so as to maintain coherency of the signals throughout the frequency range, over the entire array, and over all the measurements (e.g. in the case of non-instantaneous measurements). The methods, system and apparatus disclosed herein are capable of calibrating an antenna or an antenna array, such as the array or antennas illustrated inFIGS.1-4, by utilizing one or more targets. The calibration process is required, for example, for each pair of bi-static antennas and for each frequency. The methods and apparatus can be configured to measure the electronic delay and possible mismatch between the antennas and/or the electronics of the array or the device comprising the array, and possible mismatch between the antenna and the medium (object under test). The properties of the targets used for calibrating the antenna array may be known, unknown or partially known. For example, the target may be any object whose electromagnetic (EM) response may be measured, such as a metal ball. Methods and systems according to embodiments of the invention include measuring the EM reflections of the target, located at a specific location with respect to the antenna array, and analyzing the reflected EM signal to determine a separate EM transmit response (e.g. forward term) and receive response (e.g. reverse term) for each antenna of the antenna array. A further analysis process includes comparing (e.g. simulating) the calculated EM responses to a set of responses which should have been received and deriving the array's full complex EM response (e.g. the antennas' EM responses reflected from the medium in time and frequency). In addition, accurate chip-level calibrations are required in order to guarantee the stability and linearity of the recorded signals. Examples of calibrating an antenna may be found in PCT Patent Application No. PCT/FL2016/050444 entitled SYSTEM AND METHODS FOR CALIBRATING AN ANTENNA ARRAY USING TARGETS, which application is incorporated by reference herein in its entirety. At step550, a background removal process is applied to remove unwanted interference received at the antennas. At step560, the normalized RF signals are measured to obtain the dielectric properties of the object and identify the quantitative qualities of the object. The processing step may be activated, for example, by the processor unit and the Radio Frequency Signals Measurement Unit (RFSMU) connected to or in communication with the sensors as shown inFIG.1. Examples of methods for measuring the dielectric properties of an object and identifying the quantitative qualities of the object may be found in the present applicant's patent applications and patents, for example PCT Application number PCT/IL2015/050126, filed Feb. 4, 2015, entitled “SYSTEM DEVISE AND METHOD FOR TESTING AN OBJECT”, PCT Application PCT/IL2015/050099, filed on Jan. 28, 2015, entitled “SENSORS FOR A PORTABLE DEVICE”, and U.S. Pat. No. 8,494,615, filed Mar. 18, 2011, which applications and patent are incorporated by reference herein in their entirety.
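By way of a non-limiting illustration, the background removal of step550is often implemented as subtraction of an empty-scene reference measurement per antenna pair and frequency; treating it this way here is an assumption, not a detail of the disclosure:

```python
# Illustrative sketch only: background removal as empty-scene reference
# subtraction, per (tx antenna, rx antenna, frequency) sample.

def remove_background(measured, reference):
    """measured, reference: dicts mapping (tx, rx, freq) -> complex sample."""
    return {key: value - reference.get(key, 0j)
            for key, value in measured.items()}
```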
At step570, a 3D image reconstruction process of the object is initiated by the processing unit160. In an embodiment, the reconstruction process includes analysis of the RF data (e.g. dielectric properties of the object) by the processing unit using processing methods such as delay-and-sum (DAS) methods. Specifically, according to embodiments of the invention, a 3D image of the object is reconstructed based on arbitrary antenna arrays, such as antenna array120ofFIG.1or the antennas ofFIGS.2-4, according to a DAS beamforming method, as follows. After having transmitted from all, or some, of the sensors and having received with all or some of the remaining sensors, the reflected or affected RF signals are converted to the time domain. Let y_ij(t) denote the time domain signal obtained when transmitting from sensor i and receiving in sensor j. According to the DAS method, to obtain the image at point r in space, the signals are delayed according to the expected delay from the pair of sensors to point r, denoted T_ij(r), and then summed, yielding Eq. (1):

I_DAS(r) = Σ_ij y_ij(T_ij(r))   (1)

The DAS algorithm requires adaptations to handle the sensor radiation pattern and the frequency responses of the various system components (RF elements, traces, sensors). In addition, the signal acquisition may be performed in the time domain, or rather, it may be performed in the frequency domain, over discontinuous frequency windows. Furthermore, possibly, every frequency region uses a different set of sensors and has different gain and noise properties. The selection of these frequency windows comes hand in hand with the array design, where the angular diversity of the array compensates for the missing information in frequency, and vice versa. In general, coherent combining of the signals may be performed in the frequency domain, with predefined weights, according to Eq. (2):

I_coh(r) = Σ_ij Σ_f Re{w_ij(f; r) · Y_ij(f)}   (2)

where w_ij(f; r) is the complex weight given to frequency f in pair i→j when imaging point r. The value of w_ij is computed while taking into account the following considerations: the contribution of the various sensor pairs/sub-arrays and of different frequencies to the resolution (for example, if there are fewer low-frequency sensors, their signals will be amplified in order to balance their power and improve resolution); compensation of the gain and frequency response of different frequency regions/windows; path loss and lossy materials (weak signals that result from space/material loss are amplified, in general, as long as they are above the noise/clutter level); and known properties of the antennas/sensors (e.g. radiation pattern and frequency response) and of the path and the target (e.g. spatial and spectral response of a Rayleigh reflector). Adaptive imaging (such as Capon beamforming) may be applied; in this case the weights w_ij are determined based on the measured signals, e.g. in order to optimize SNR or signal-to-clutter ratio at a given location in the image.
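By way of a non-limiting illustration, Eq. (1) might be evaluated at a single image point as in the following sketch; the linear interpolation between time samples is an assumed implementation detail, not part of the disclosure:

```python
# Illustrative sketch only: delay-and-sum evaluation of I_DAS(r) per Eq. (1).

def das_image_point(signals, delays, fs):
    """signals: dict (i, j) -> list of time-domain samples y_ij
    delays:  dict (i, j) -> expected delay T_ij(r) in seconds for point r
    fs:      sampling rate in Hz
    Returns I_DAS(r) = sum over pairs of y_ij(T_ij(r))."""
    total = 0.0
    for pair, y in signals.items():
        t = delays[pair] * fs          # delay in (fractional) samples
        k = int(t)
        if 0 <= k < len(y) - 1:
            frac = t - k
            total += (1 - frac) * y[k] + frac * y[k + 1]  # linear interpolation
    return total
```

Evaluating this over a grid of points r yields the reconstructed image volume; the frequency-domain combining of Eq. (2) would replace the time-domain lookup with weighted sums over Y_ij(f).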
Transparent Objects
In some cases, the object, such as object110or210ofFIGS.1-2, may include a specific structure which requires a more challenging solution to construct a 3D image; this concerns specifically the imaging of objects comprising rigid man-made and half-transparent bodies, for example hollow objects made of glass or plastic. This is a result of these objects' smooth surfaces, which operate like mirrors at small wavelengths (i.e., where the roughness of the surface is substantially smaller than the wavelength). Unlike point scatterers, most of the energy in these objects is reflected along a specific direction. In this case the resolution cannot be obtained by simply summing all sensor pairs (such as in I_DAS(r)), and different bi-static measurements of the surface have to be utilized. Furthermore, the object's rigid body distorts the signals obtained from deeper layers (e.g. cavities). Due to the high sensitivity of the system (e.g. such as system100) to the propagation velocity and the width of the materials between the imaged body and the antenna, the surface and in some cases the material properties have to be estimated prior to imaging of the internal layers.
Image Improvements
According to further embodiments of the invention, the number of different signals and viewing angles, and hence the image quality and resolution of the object, can be improved by using additional imaging methods and devices such as Synthetic Aperture Radar (SAR) methods. According to SAR methods, the array elements or the object may be moved, as illustrated inFIGS.1-4. The information can be combined either at the signal level (i.e. extending I_coh by adding synthetic pairs), or at the image level (i.e. by coherently combining images obtained from different angles). The location of the array can be estimated using one or more motion sensors and the signals themselves (e.g. with respect to a reference target or a reference antenna). However, the imaging complexity, using additional imaging methods such as SAR, is increased since the image resolution and the required number of sensors increase in proportion to one over the wavelength. To cope with the increased imaging complexity, several techniques may be applied. According to a first embodiment, the antenna array may be divided into subarrays, where the information from each subarray is processed separately by a separate processing unit or by a single processing unit such as unit160and then combined by the processing unit once the image is constructed at the image reconstruction step. Alternatively or in combination, a variable resolution imaging process may be used. The variable resolution imaging process comprises producing a low resolution image, identifying the “interesting” parts of the image, and improving resolution only in those parts. In some cases, the “interesting” parts may be defined and located according to suitable predefined criteria and methods. Furthermore, according to some embodiments of the invention, as part of the step of reconstructing the image, regularities in the antenna arrays may be utilized in order to reduce the number of computations (using FFT-like structures). In some cases, polarimetric information obtained from cross- and co-polarized sensors may be utilized in order to estimate properties of the object which are not visible by unipolar imaging, for example structures and details which lie below the imaging resolution. In another embodiment, known imaging methods such as Capon or various adaptive beamforming methods may be utilized in order to improve image quality and/or signal-to-clutter ratio. The reconstructed image may be improved by an iterative process, which extracts important physical parameters, e.g., dielectric properties, and reuses them to improve the image.
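By way of a non-limiting illustration, the variable resolution process described above might be organized as in the following sketch; the magnitude threshold used to mark "interesting" points and the refine() grid-subdivision callback are assumptions rather than details of the disclosure:

```python
# Illustrative sketch only: coarse reconstruction, then refinement of the
# "interesting" regions on a finer grid.

def variable_resolution(reconstruct, coarse_grid, refine, threshold):
    """reconstruct -- callable mapping an image point r to its value, e.g. a
                      DAS evaluation such as the sketch above
    coarse_grid -- iterable of coarse image points
    refine      -- callable mapping a coarse point to its finer sub-grid points
    threshold   -- magnitude above which a coarse point is "interesting"
    """
    image = {p: reconstruct(p) for p in coarse_grid}
    interesting = [p for p, v in image.items() if abs(v) >= threshold]
    # Re-image only the interesting regions at higher resolution.
    image.update({q: reconstruct(q) for p in interesting for q in refine(p)})
    return image
```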
FIG.5Bis a flowchart of a method501for 3D modelling of an object, in accordance with embodiments of the invention. The method501comprises all the steps for constructing a 3D image as illustrated in flowchart500ofFIG.5A. Once the reconstructed image is obtained at step570, a 3D modelling of the object is obtained at step580, resulting in a mechanical 3D model of the object. According to one embodiment of the invention, the external contour of the object is first modeled, followed by a “peeling” of the external model and modeling of the next internal contour of the object, as in an onion peeling process, step by step until the inner parts of the object are completely modeled. The reconstruction stage includes a combination of transmission imaging, i.e., using the signals which passed through the object, and reflection imaging, i.e., using the signals which are reflected back from the object. According to some embodiments, polarimetric data may be exploited as well. According to some embodiments, the object may be inserted into a high-epsilon material in order to improve resolution, resulting in an effectively shorter wavelength than in air. Reference is now made toFIG.6, illustrating a 3D cross-section image of a solid opaque cup and a ball inside the cup, where both the ball and the cup are visible. The present invention provides a system and method for modeling an object which includes providing a representation of the external and internal parts and parameters (e.g. width, volume, etc.) of the object, including for example elements which are inside the object, such as the ball shown inFIG.6. The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. These terms encompass the terms “consisting of” and “consisting essentially of”. As used herein, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.
To the extent that section headings are used, they should not be construed as necessarily limiting. In further embodiments, the processing unit may be a digital processing device including one or more hardware central processing units (CPU) that carry out the device's functions. In still further embodiments, the digital processing device further comprises an operating system configured to perform executable instructions. In some embodiments, the digital processing device is optionally connected to a computer network. In further embodiments, the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web. In still further embodiments, the digital processing device is optionally connected to a cloud computing infrastructure. In other embodiments, the digital processing device is optionally connected to an intranet. In other embodiments, the digital processing device is optionally connected to a data storage device. In accordance with the description herein, suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will recognize that many smartphones are suitable for use in the system described herein. Those of skill in the art will also recognize that select televisions with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art. In some embodiments, the digital processing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device's hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smart phone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. In some embodiments, the device includes a storage and/or memory device. The storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis. In some embodiments, the device is volatile memory and requires power to maintain stored information. In some embodiments, the device is non-volatile memory and retains stored information when the digital processing device is not powered. In further embodiments, the non-volatile memory comprises flash memory. 
In some embodiments, the non-volatile memory comprises dynamic random-access memory (DRAM). In some embodiments, the non-volatile memory comprises ferroelectric random access memory (FRAM). In some embodiments, the non-volatile memory comprises phase-change random access memory (PRAM). In other embodiments, the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tape drives, optical disk drives, and cloud computing based storage. In further embodiments, the storage and/or memory device is a combination of devices such as those disclosed herein. In some embodiments, the digital processing device includes a display to send visual information to a user. In some embodiments, the display is a cathode ray tube (CRT). In some embodiments, the display is a liquid crystal display (LCD). In further embodiments, the display is a thin film transistor liquid crystal display (TFT-LCD). In some embodiments, the display is an organic light emitting diode (OLED) display. In various further embodiments, an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display. In some embodiments, the display is a plasma display. In other embodiments, the display is a video projector. In still further embodiments, the display is a combination of devices such as those disclosed herein. In some embodiments, the digital processing device includes an input device to receive information from a user. In some embodiments, the input device is a keyboard. In some embodiments, the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus. In some embodiments, the input device is a touch screen or a multi-touch screen. In other embodiments, the input device is a microphone to capture voice or other sound input. In other embodiments, the input device is a video camera to capture motion or visual input. In still further embodiments, the input device is a combination of devices such as those disclosed herein. In some embodiments, the system disclosed herein includes one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device. In further embodiments, a computer readable storage medium is a tangible component of a digital processing device. In still further embodiments, a computer readable storage medium is optionally removable from a digital processing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media. In some embodiments, the system disclosed herein includes at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable in the digital processing device's CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. 
In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages. The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof. In some embodiments, a computer program includes a mobile application provided to a mobile digital processing device. In some embodiments, the mobile application is provided to a mobile digital processing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile digital processing device via the computer network described herein. In view of the disclosure provided herein, a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, JavaScript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof. Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK. Those of skill in the art will recognize that several commercial forums are available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Android™ Market, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop. In some embodiments, the system disclosed herein includes software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. 
In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location. In some embodiments, the system disclosed herein includes one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of information as described herein. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. In still further embodiments, a database is cloud computing-based. In other embodiments, a database is based on one or more local computer storage devices. In the above description, an embodiment is an example or implementation of the inventions. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments. Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment. Reference in the specification to "some embodiments", "an embodiment", "one embodiment" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. It is to be understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only. The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples. It is to be understood that the details set forth herein do not constitute a limitation on an application of the invention. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above. 
It is to be understood that the terms "including", "comprising", "consisting" and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers. If the specification or claims refer to "an additional" element, that does not preclude there being more than one of the additional element. It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element. It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included. Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. Methods of the present invention may be implemented by performing or completing, manually, automatically, or a combination thereof, selected steps or tasks. The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only. Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention may be implemented in testing or practice with methods and materials equivalent or similar to those described herein. While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. In accordance with one or more exemplary embodiments, methods and systems for radar detection and position estimation are described herein. An embodiment of a radar system is configured to estimate a position and/or velocity of an object. An object may be any feature or condition that reflects transmitted radar signals. The radar system may be included in or connected to a vehicle for detection of objects such as road features, road obstructions, other vehicles, trees, people and others. The radar system is not limited to use with vehicles, and may be used in any context (e.g., weather, aviation and others). The radar system is configured to transmit radar signals from a series or plurality of transmitters based on generated code sequences. In one embodiment, the transmitters are configured to simultaneously emit a respective radar signal, for example, as part of a MIMO system. An embodiment of a method of coding transmission signals and/or performing radar detection and object property estimation includes generating a sequence of repeated codes, which is applied to each transmitter for transmission of an encoded radar signal. Each code in the sequence is repeated, and each code has a number of symbols that is at least as high as the number of transmitters. The code sequence is a variable code sequence, in that the length of each code is different from the length of the other codes within the code sequence. For example, the code sequence includes at least a first code having a first code length, which is repeated in the sequence according to a selected number of repetitions. A second code in the sequence has a code length that is different from the code length of the first code, and the second code is repeated in the sequence according to a selected number of repetitions. Additional codes may be successively added to the code sequence. The length of each code is selected, in one embodiment, to reduce or minimize overlap between replicas of codes, so that ambiguities can be reduced and/or resolved. Embodiments described herein present a number of advantages. For example, signals transmitted using the coding scheme described herein allow for effective separation of return signals associated with multiple transmitters. Conventional radar techniques utilizing MIMO systems can suffer from ambiguities due to large numbers of transmitters transmitting coded signals over a single time frame. Embodiments described herein provide for radar systems with multiple transmitters that are robust to Doppler ambiguity. FIG.1shows an embodiment of a motor vehicle10, which includes a vehicle body12defining, at least in part, an occupant compartment14. The vehicle body12also supports various vehicle subsystems including an engine assembly16, and other subsystems to support functions of the engine assembly16and other vehicle components, such as a braking subsystem, a steering subsystem, a fuel injection subsystem, an exhaust subsystem and others. The vehicle10includes aspects of a radar system20for detecting and tracking objects, which can be used to alert a user, perform avoidance maneuvers, assist the user and/or autonomously control the vehicle10. 
The radar system20includes one or more radar sensing assemblies22, each of which may include one or more transmit elements and/or one or more receive elements. The vehicle10may incorporate a plurality of radar sensing assemblies disposed at various locations and having various angular directions. For example, each radar sensing assembly22includes a transmit portion and a receive portion. The transmit and receive portions may include separate transmit and receive antennas or share one or more antennas in a transceiver configuration. Each radar sensing assembly22may include additional components, such as a low pass filter (LPF) and/or a controller or other processing device. In one embodiment, the radar sensing assembly22includes multiple transmitters and one or more receivers. For example, the radar sensing assembly is configured as a multi-input and multi-output (MIMO) transmitter/receiver assembly. The radar sensing assemblies22communicate with one or more processing devices, such as processing devices in each assembly and/or a remote processing device such as an on-board processor24and/or a remote processor26. The remote processor26may be part of, for example, a mapping system or vehicle diagnostic system. The vehicle10may also include a user interaction system28and other components such as a GPS device. The radar system20is configured generally to acquire radar signals and analyze the radar signals to detect an object and estimate one or more properties of the object. Examples of such properties include position, angle, velocity and/or acceleration. The position and/or velocity are estimated (e.g., by integrating acquired signal pulses over a selected time frame). FIG.2illustrates aspects of an embodiment of a computer system30that is in communication with or is part of the radar system20, and that can perform various aspects of embodiments described herein. The computer system30includes at least one processing device32, which generally includes one or more processors for performing aspects of radar detection and analysis methods described herein. The processing device32can be integrated into the vehicle10, for example, as the on-board processor24, or can be a processing device separate from the vehicle10, such as a server, a personal computer or a mobile device (e.g., a smartphone or tablet). For example, the processing device32can be part of, or in communication with, one or more engine control units (ECU), one or more vehicle control modules, a cloud computing device, a vehicle satellite communication system and/or others. The processing device32may be configured to perform radar detection and analysis methods described herein, and may also perform functions related to control of various vehicle subsystems. Components of the computer system30include the processing device32(such as one or more processors or processing units) and a system memory34. The system memory34may include a variety of computer system readable media. Such media can be any available media that is accessible by the processing device32, and includes both volatile and non-volatile media, removable and non-removable media. For example, the system memory34includes a non-volatile memory36such as a hard drive, and may also include a volatile memory38, such as random access memory (RAM) and/or cache memory. The computer system30can further include other removable/non-removable, volatile/non-volatile computer system storage media. 
The system memory34can include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out functions of the embodiments described herein. For example, the system memory34stores various program modules40that generally carry out the functions and/or methodologies of embodiments described herein. For example, a signal generation module42may be included to perform functions related to generating code sequences and transmission of radar signals, and an analysis module44may be included to perform functions related to acquiring and processing received signals, and/or position estimation and range finding. The system memory34may also store various data structures46, such as data files or other structures that store data related to radar detection and analysis. As used herein, the term "module" refers to processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. The processing device32can also communicate with one or more external devices48such as a keyboard, a pointing device, and/or any devices (e.g., network card, modem, etc.) that enable the processing device32to communicate with one or more other computing devices. In addition, the processing device32can communicate with one or more devices that may be used in conjunction with the radar system20, such as a Global Positioning System (GPS) device50and a camera52. The GPS device50and the camera52can be used, for example, in combination with the radar system20for autonomous control of the vehicle10. Communication with various devices can occur via Input/Output (I/O) interfaces54. The processing device32may also communicate with one or more networks56such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via a network adapter58. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the computer system30. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, and data archival storage systems, etc. FIG.3depicts an example of a transmitter/receiver array, which may be part of a radar sensing assembly22, but is not so limited. In this example, the array is part of a multi-input and multi-output (MIMO) assembly, which includes a set of transmitters60and a set of receivers62. In various embodiments, the MIMO assembly can include a set of transducers, with each transducer serving as both a transmitter and a receiver. The transmitter/receiver array is shown as having one receiver62; however, the transmitter/receiver array can have any number of receivers62. Each transmitter includes a signal generator64, a trigger circuit66, an amplifier68and a transmitter antenna70. The signal generator64generates an RF pulse sequence for transmission. The pulse sequence is encoded such that each transmitter antenna70emits a signal encoded with various codes. The RF pulse sequence, in one embodiment, is a linear frequency modulated (LFM) signal, also known as a chirp signal, in which the frequency of the signal increases from a first frequency to a second frequency over the duration of the signal in a linear fashion. 
The trigger circuit66provides the chirp signal to the transmitter antenna70according to a time schedule. In various embodiments, the trigger circuits of the transmitters are synchronized for time-division multiplexing of their transmitted signals. A coded signal from the trigger circuit66is amplified at the amplifier68and provided to the transmitter antenna70, which propagates the signal. Return signals reflected from the surrounding environment are received by the receiver62. A receiver antenna63receives reflections from various objects in the environment, and provides the signal to an amplifier74. The amplified signal is provided to a multiplexer circuit76, which is synchronized with the trigger circuit66in order to establish a phase relation between the set of transmitters60and the receiver(s)62. The amplified and synchronized signal is then input to an analog-to-digital (A/D) converter78. Transmitters (Tx) can be configured to transmit radar signals according to any suitable transmission regime. Examples of such regimes include time-division multiplexing (TDM) and code division multiple access (CDMA) transmission regimes. FIG.4shows an example of conventional transmission sequences that are emitted simultaneously (or with a fixed phase difference) by each of a plurality of transmitters Tx (e.g., a MIMO array). The plurality of transmitters Tx include a number N of transmitters Txi, denoted as Tx1, . . . , TxN. Each transmitter Txi receives signals from a code generator and emits a radar signal that is encoded according to a respective code sequence80. The code sequence80includes a number Q of successive repetitions of a code82, denoted as Si. The code sequence80can be represented by Q repetitions of a code Si(codes Si1, . . . , SiQ) in sequence. Each code82may be configured as a "code group" including a number of individual code elements or sub-codes. The sub-codes are referred to herein as "symbols." For example, each transmitter Txi transmits a chirped signal according to a respective code sequence80, which includes a code82having a series of symbols84. The number of symbols84in each code82is selected to be greater than or equal to the number of transmitters. A "code" or "symbol" can be expressed as a selected waveform or other property of a transmission signal. For example, the code82for transmitter Tx1is made up of four symbols84denoted as a1, . . . a4, and the code82for transmitter Tx2is made up of four symbols84denoted as b1, . . . b4. The code82for transmitter Tx3is made up of four symbols84denoted as c1, . . . c4, and the code82for transmitter Tx4is made up of four symbols84denoted as d1, . . . d4. The repetition rate is based on the temporal length (also referred to simply as "length") of the code82, the length of the individual symbols84and the transmission time frame. The transmitted signals are reflected, and combined reflected signals are received at a receiver Rx. In the example ofFIG.4, the receiver Rx detects a sequence86of reflected signals. For example, signals corresponding to symbols a1, b1, c1and d1reflect at a reflecting object87and combine to generate a received signal g1. This process continues to generate a series of combined signals88making up the sequence86, each of which includes a series of repeated symbols g1, g2, g3and g4. Although only four transmitters and code sequences are shown, the assembly can include any number of transmitters. Generally, higher numbers of transmitters can lead to ambiguity issues realized by high side lobes and lower resolutions. 
Conventionally, the codes82transmitted by each transmitter are of equal code length. For example, the code length of each code82is the same and equal to the number of transmitters (which in this example is four) multiplied by the length of each symbol. As the number of transmitters increases, and the number of symbols increases, the number Q of repetitions decreases. Therefore, the code repetition rate is 1/(code sequence length). In the example ofFIG.4, the repetition rate is 1/(4*symbol length). Such reduced repetition rates can lead to ambiguities that are difficult to resolve. MIMO radar is an efficient technique to increase the angular resolution with multiple transmissions. However, the multiple transmissions cause challenges such as Doppler ambiguity. The Doppler ambiguity issue becomes more severe as the number of transmitters increases and is apparent, for example, in time-division multiplexing (TDM) and code division multiple access (CDMA) transmissions. The ambiguity issue is realized by high side-lobes in calculated Doppler spectra. Embodiments described herein present a solution to the above challenge by providing for a variable length coding scheme for MIMO and/or other multiple transmitter radar systems. An embodiment of a method of detecting and estimating a property of an object includes constructing a code sequence including a plurality of codes. Each code in the sequence is generated by determining a number of symbols (e.g. by a random number generator). Each code has a number of symbols that is greater than or equal to the number of transmitters, and each code has a different code length (e.g., number of symbols, or length of individual symbols). The method partitions a long sequence into blocks of symbols, where each block is collectively referred to herein as a "code." The length of the code is varied by varying the number of symbols in a code or block, and/or varying the length of individual symbols. The structured variations of the code lengths result in low side-lobes in the Doppler spectrum, and thus robustness to Doppler ambiguity issues, which are a major challenge in MIMO radar. Embodiments provide a solution in the form of, for example, the ability to use a large number of transmitters while remaining robust to Doppler ambiguity. An example of a code sequence S having varying code lengths is shown inFIG.5. The code sequence S is applied to a plurality of transmitters for transmission of a measurement signal. In this example, the code sequence S includes three different code lengths (durations). A first code S0has a first code length T1, a second code S1has a second code length T2, and a third code S2has a third code length T3. In this example, the first code length T1is shorter than the second code length T2, and the second code length is shorter than the third code length T3(i.e., T1<T2<T3). The code length may be defined by the number of symbols in a code. For example, the shortest code S0has a number of symbols that is greater than or equal to the number of transmitters. As shown, each code is repeated three times. It is noted that the number and length of codes shown inFIG.5, as well as the number of repetitions, are not intended to be limiting. An embodiment of a method of generating a code sequence for a multiple transmitter radar system is described as follows. The method includes generating a number M of groups of random symbols, where M is at least equal to the number N of transmitters Tx in the system. 
In one embodiment, the codes and/or symbols are generated by using a random or pseudorandom series of codes or symbols (e.g., a pseudo-random bit sequence (PRBS)). Each group constitutes a sequence or code Si, where i is a number from zero to M−1. The code Sihas a code length Ni, which may be equal to the length of the symbols multiplied by the number of symbols. The code length is different for each sequence Si. The result is a sequence S of codes Si. The code sequence S is transmitted by all of the transmitters. The code sequence S can be represented as a matrix for each code Si. Each matrix is referred to as a "code matrix" or "Simatrix," where i is an index of the code matrix. An example of an Simatrix or code matrix600is shown inFIG.6. The Si matrix defines a code Si, and has a row for each transmitter Tx1-Tx4. Each row is populated with a series of symbols, thereby defining a plurality of columns. The number of columns is selected based on a desired code length, and is at least as high as the number of transmitters. In the Simatrix600ofFIG.6, a first column includes symbols a1, b1, c1and d1, a second column includes symbols a2, b2, c2and d2, a third column includes symbols a3, b3, c3and d3, and a fourth column includes symbols a4, b4, c4and d4. The dimensions of the Si matrix600are (NTx,Ni), where NTxis a number of transmitters Tx, and Niis the code length of the i-th code (or code group). In an embodiment, the code lengths Niare set so that there is minimal overlap between the replicas of all codes. This is obtained by having the code length Ni(e.g., the number of symbols) be equal to or greater than the number of transmitters (Ni≥NTx), to ensure the invertibility of each code matrix. Another condition may be that all pairs of adjacent codes Si(for i=1 through M) have code lengths that do not have a common divisor. The repetition number is denoted as Q, where Q equals the total number of sub-groups or symbols divided by M. Each code Siis thus repeated Q times. This repetition is realized through a final or overall code sequence S, which in an embodiment includes individual codes denoted by symbol Sji, where i is the code index number and j is a repetition index representing a repetition number (i.e., a number of repetitions). Thus, there are M code matrices S1, S2, . . . , SM, where each code matrix is repeated Q times. An example of an overall code sequence for M codes, where each code matrix is repeated Q times, follows: S=S11, S21, . . . , SQ1, S12, S22, . . . , SQ2, . . . , S1M, S2M, . . . , SQM. Return signals Y based on reflections of radar signals encoded using the above sequence can be represented as: Y=aRx(θ)aTxT(θ)S, where aRx(θ) is the response of each receiver Rx to an angle θ, aTxT(θ) is the transpose of the Tx response to the angle θ, and S represents a code sequence matrix. 
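As a concrete illustration of the construction just described, the following Python sketch builds M code matrices of unit-magnitude pseudorandom phase symbols, with each code length Ni at least NTx and adjacent lengths sharing no common divisor, and concatenates Q repetitions of each. The names (make_code, choose_lengths, build_sequence) and the particular length-selection rule are illustrative assumptions, not the patent's implementation.

import math
import numpy as np

rng = np.random.default_rng(0)

def make_code(n_tx, n_symbols):
    """One code matrix: random phase symbols e^(j*phi), phi ~ U(0, 2*pi),
    one row per transmitter."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_tx, n_symbols))
    return np.exp(1j * phases)

def choose_lengths(n_tx, m_codes):
    """Pick m_codes lengths >= n_tx such that adjacent lengths are coprime."""
    lengths = [n_tx]
    while len(lengths) < m_codes:
        cand = lengths[-1] + 1
        while math.gcd(cand, lengths[-1]) != 1:
            cand += 1
        lengths.append(cand)
    return lengths

def build_sequence(n_tx=12, m_codes=3, q_reps=3):
    """Overall sequence: S^1 repeated Q times, then S^2, ..., S^M,
    concatenated along the symbol axis; row i is transmitter i's stream."""
    codes = [make_code(n_tx, n) for n in choose_lengths(n_tx, m_codes)]
    blocks = [c for code in codes for c in [code] * q_reps]
    return np.concatenate(blocks, axis=1), codes

S, codes = build_sequence()
print(S.shape)  # (12, (12 + 13 + 14) * 3) -> (12, 117)

Because adjacent block lengths are coprime, replicas of the different codes drift apart over the sequence rather than repeating in lockstep, which is what suppresses the ambiguous Doppler peaks described below.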
FIG.8illustrates aspects of an embodiment of a computer-implemented method100of radar detection and analysis, and object property estimation, which includes detecting an object and estimating an object property (e.g., location or position, direction and/or velocity). The method100may be performed by a processor or processors disposed in a vehicle (e.g., processing device32, as an ECU or on-board computer) and/or disposed in a device such as a smartphone, tablet or smartwatch. The method100is discussed in conjunction with the radar system20ofFIG.1and components shown inFIG.2for illustration purposes. It is noted that aspects of the method100may be performed by any suitable processing device or system. The method100includes a plurality of stages or steps represented by blocks101-105, all of which can be performed sequentially. However, in some embodiments, one or more of the stages can be performed in a different order than that shown, or fewer than all of the stages shown may be performed. At block101, a code sequence S is generated (e.g., by the signal generation module42and/or the signal generator64) by selecting a series of random or pseudo-random symbols (e.g., the symbols ofFIG.6) that make up each code Si. The length of the code sequence S is the same for each transmitter Tx. Each row of the matrix S is transmitted from a respective transmitter Tx as a sequence of code symbols. As each row is the same length, the total number of symbols transmitted from each transmitter is the same. The length and repetition rate of the code sequence S are selected as discussed above. In one embodiment, each code Sihas a different number of symbols. Thus, the length of each code Simay be expressed as a number of symbols. For example, a processing device selects a length for each code Si, and populates a matrix or other data structure (e.g., the matrix ofFIG.6) with random or pseudo-random symbols. The number of symbols in each code Sivaries within the sequence. The coding method partitions a long sequence into blocks of symbols, where the code length varies from one block to another. The structured variations of the code length result in low side-lobes in the Doppler spectrum, and thus provide robustness to Doppler ambiguity issues. Therefore, embodiments described herein are advantageous at least because of this robustness, as ambiguity issues represent a major challenge in conventional MIMO radar. At block102, radar signals are transmitted by the transmitters Tx according to the code sequence. Each transmitter Tx transmits radar signals having a series of pulses. As used herein, "pulses" refer to a series of repeating waveforms, which are not limited to the examples described. In one embodiment, the transmit element transmits a linear frequency-modulated continuous wave (LFM-CW) signal. This signal may be referred to as a "chirp signal," and each pulse may be referred to as a "chirp." Each transmitter transmits (e.g., simultaneously) a signal according to the same code sequence S. For example, each transmitter repeats a first code by a selected number of repetitions Q, repeats a second code by Q repetitions, and successively repeats and transmits subsequent codes. As the codes have different lengths, the code length within the sequence changes for all the transmitters simultaneously. A return signal is detected or measured by one or more receive elements as a measurement signal. 
For example, analog signals detected by the receive elements are sampled and converted to digital signals, referred to herein as detection signals. The return signal Y includes a series of reception signals Yji, where each reception signal Yjicorresponds to a reflection of the emitted signal symbol Sji, and is shown as: Y=Y11, . . . , YQ1, Y12, . . . , YQ2, . . . , Y1M, . . . , YQM. The total matrix of received symbols is: Yij=aRx(θ)aTxT(θ)Sij, where aRx(θ) is an Rx array response for an angle θ, and aTxT(θ) is the transposed transmitter array response for the angle θ. The dimensions of Yjiare (NRx,Ni), where NRxis the number of receivers Rx and Niis the code length. The dimensions of Sjiare (NTx,Ni), where NTxis the number of transmitters Tx. At block103, a processing device, such as the processor32, transforms each return pulse into the frequency domain by using a Fourier transform. In one embodiment, the processing device32uses a fast Fourier transform (FFT) algorithm (also referred to as "range FFT") to generate range spectra associated with each return pulse. The range FFT is a one-dimensional FFT configured to transform the return pulses into range intensity values that can be used to estimate the range (referred to as the "range domain") of a reflection. To generate an FFT output, range bins defined by the range FFT are scanned, and range bins corresponding to the same range are extracted from all receive antennas. The result of this extraction is a vector of range bins that has a length equal to the number of receive antennas. The output of the range FFT for a transmitter i is then multiplied with a decoding matrix (e.g. a pseudo-inverse matrix), and the output Zi can be expressed by: Zi=aRx(θ)aTxT(θ)=Yij(Sij)H(Sij(Sij)H)−1, where the superscript H denotes the conjugate (Hermitian) transpose. At block104, a Doppler frequency spectrum is generated for use in estimating properties such as velocity. In one embodiment, the range and velocity of the object are determined by applying a second Fourier transform to estimate the frequency shift (Doppler frequency). In one embodiment, a processor uses a Discrete Fourier transform (DFT) algorithm ("Doppler DFT") to generate frequency spectra associated with each return pulse, which can be used to estimate a position and velocity value associated with each frequency spectrum. The Doppler DFT output can be expressed as a matrix F. The output of the Doppler DFT per each transmitter (per range bin) can be represented by: R=UF, where R is an output matrix, U is a stacked Zimatrix (e.g., U=[vec(Z0), vec(Z1), . . . ]), and F is a DFT matrix. At block105, directional properties are estimated for detected objects. For example, beamforming is performed for each range bin and Doppler DFT bin to generate an angle-range-Doppler matrix W, represented by W=AR, where A=[v0, v1, . . . ], and vi=vec(aRx(θi)aTxT(θi)). FIG.9shows an example of a Doppler spectrum110generated by the method100. In this example, there are 12 transmitters Tx and 16 receivers Rx. Three different code lengths are selected so that the number of symbols in the shortest code is greater than or equal to the number of Txs. In this example, three code lengths are selected (i.e., 13, 14 and 15). Note that the code lengths are expressed as the number of symbols, and that the code lengths do not have a common divisor. Each code in this example is a pseudorandom phase sequence e^(jϕn), where ϕn is drawn from U(0,2π). 
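The per-code decoding step above reduces, for one code block, to a right-multiplication by the pseudo-inverse of the code matrix, Z = Y S^H (S S^H)^(−1). A minimal Python sketch follows, assuming a single noise-free reflector and uniform linear arrays; both assumptions, and all variable names, are for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_sym = 12, 16, 13

# Simulated array responses for a single angle theta (half-wavelength ULAs).
theta = np.deg2rad(20.0)
a_tx = np.exp(1j * np.pi * np.arange(n_tx) * np.sin(theta))
a_rx = np.exp(1j * np.pi * np.arange(n_rx) * np.sin(theta))

# One code block S (n_tx x n_sym) of unit-magnitude random phases.
S = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_tx, n_sym)))

# Received block Y = a_rx a_tx^T S (noise-free, single reflector).
Y = np.outer(a_rx, a_tx) @ S

# Decoding: right-multiply by the pseudo-inverse of S.
Z = Y @ S.conj().T @ np.linalg.inv(S @ S.conj().T)

# Z recovers the rank-one outer product a_rx a_tx^T.
print(np.allclose(Z, np.outer(a_rx, a_tx)))  # True

With n_sym >= n_tx and random phases, S S^H is (generically) invertible, which is exactly the invertibility condition Ni≥NTx stated earlier.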
The Doppler spectrum110shows frequency peaks112(dashed lines) resulting from a conventional fixed-code method, and also shows frequency peaks114resulting from variable coding according to embodiments described herein. The frequency peaks114include a repeating, high intensity peak116at the correct Doppler frequency. As is evident, the peak116has a higher intensity, and ambiguous peaks are of a much lower intensity, so that the correct peak is more easily identifiable. In contrast, the peaks generated by conventional methods have a much lower contrast between peaks, making it more difficult to analyze the spectrum and estimate the correct frequency. While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.
DETAILED DESCRIPTION A ranging apparatus of an embodiment is a ranging apparatus adopting communication type ranging by a phase detection scheme, the ranging apparatus including: a transmitting circuit configured to be able to transmit by a plurality of channels used for data communication and configured to transmit a transmission signal obtained by modulating transmission data; and a control circuit configured to cause a plurality of continuous waves having mutually different frequencies in a same channel to be generated as continuous waves used for ranging by the phase detection scheme. An embodiment of the present invention will be described in detail below with reference to drawings. Embodiment FIG.1is a block diagram showing a ranging apparatus according to an embodiment of the present invention. The ranging apparatus in the present embodiment also serves as a data communication apparatus adopting FSK (frequency shift keying) modulation and is in a configuration in which a transmitting/receiving circuit is shared between a circuit portion for ranging and a circuit portion for data communication. Further, in the present embodiment, a plurality of CWs (continuous waves) in a band in one channel among transmission channels used for data communication are used for ranging, so that ranging for a relatively long distance is made possible. In the present embodiment, an example will be described in which a phase detection scheme using CWs, which are unmodulated carriers, is adopted, and communication-type ranging, in which a distance between apparatuses is determined by communication, is adopted. FIG.2is an explanatory diagram for illustrating an example of a ranging system that performs the communication-type ranging. The ranging system ofFIG.2measures a distance between an apparatus30and an apparatus40by communication between the ranging apparatus30and the ranging apparatus40. The apparatus30and the apparatus40have the same configuration. The apparatus30is provided with a transmitting portion32and a receiving portion33. The transmitting portion32generates a CW used for ranging (hereinafter also referred to as a ranging signal). The ranging signal from the transmitting portion32is supplied to an antenna34via a switch35and transmitted to the apparatus40. A ranging signal from the apparatus40arrives at the antenna34of the apparatus30. The ranging signal is supplied to the receiving portion33via the switch35and received by the receiving portion33. Note that a transmitting portion42, a receiving portion43, an antenna44and a switch45of the apparatus40have configurations similar to those of the transmitting portion32, the receiving portion33, the antenna34and the switch35of the apparatus30, respectively. Thereby, a ranging signal from the apparatus30is received by the apparatus40, and a ranging signal from the apparatus40is received by the apparatus30. Digital portions31and41have similar configurations and control each portion of the apparatus30and the apparatus40, respectively. In other words, the digital portion31causes the transmitting portion32to generate a ranging signal to be transmitted to the apparatus40and causes the receiving portion33to receive a ranging signal from the apparatus40. Similarly, the digital portion41causes the transmitting portion42to generate a ranging signal to be transmitted to the apparatus30and causes the receiving portion43to receive a ranging signal from the apparatus30. 
(Example of Ranging Operation) Next, an example of ranging operation will be described, using a method described in Patent Literature 2. The apparatus30and the apparatus40mutually transmit and receive ranging signals (CWs) which are unmodulated carriers with a frequency fL, and mutually transmit and receive ranging signals (CWs) which are unmodulated carriers with a frequency fH. Using angular frequencies ωB and ωC of oscillation signals generated by oscillators of the apparatuses30and40, the oscillators not being shown, the frequencies are expressed as 2πfL=ωC−ωB and 2πfH=ωC+ωB. The frequencies of the oscillation signals of the oscillators of the apparatuses30and40are, strictly speaking, not the same, the oscillators not being shown. In consideration of this, it is assumed that the apparatus30transmits transmission signals of two waves, a transmission signal with an angular frequency of ωC1+ωB1and a transmission signal with an angular frequency of ωC1−ωB1. Similarly, it is assumed that the apparatus40transmits transmission signals of two waves, a transmission signal with an angular frequency of ωC2+ωB2and a transmission signal with an angular frequency of ωC2−ωB2. The apparatuses30and40receive mutual transmission signals. Further, it is assumed that an initial phase of an oscillation signal with an angular frequency of ωC1and an initial phase of an oscillation signal with an angular frequency of ωB1of the apparatus30are θC1and θB1, respectively, and it is assumed that an initial phase of an oscillation signal with an angular frequency of ωC2and an initial phase of an oscillation signal with an angular frequency of ωB2of the apparatus40are θC2and θB2, respectively. An amount of phase shift that occurs before the transmission signal with the angular frequency ωC1+ωB1, among transmission signals transmitted from the apparatus30to the apparatus40, is received by the apparatus40after a delay τ1is indicated by θH1(t), and an amount of phase shift that occurs before the transmission signal with the angular frequency ωC1−ωB1is received by the apparatus40is indicated by θL1(t). Similarly, an amount of phase shift that occurs before the transmission signal with the angular frequency ωC2+ωB2, among transmission signals transmitted from the apparatus40to the apparatus30, is received by the apparatus30after a delay τ2is indicated by θH2(t), and an amount of phase shift that occurs before the transmission signal with the angular frequency ωC2−ωB2is received by the apparatus30is indicated by θL2(t). It is shown in Patent Literature 2 that, in this case, Equation (1) below is satisfied: {θH1(t)−θL1(t)}+{θH2(t)−θL2(t)}=(θτH1−θτL1)+(θτH2−θτL2)  (1) Here, the following are assumed: θτH1=(ωC1+ωB1)τ1  (2) θτH2=(ωC2+ωB2)τ2  (3) θτL1=(ωC1−ωB1)τ1  (4) θτL2=(ωC2−ωB2)τ2  (5) Since the radio wave delays τ1and τ2between the apparatuses30and40are the same regardless of the traveling direction, Equation (6) is obtained from Equation (1): {θH1(t)−θL1(t)}+{θH2(t)−θL2(t)}=(θτH1−θτL1)+(θτH2−θτL2)=2×(ωB1+ωB2)τ1  (6) When the radio wave speed is indicated by c, the distance between the apparatuses30and40is indicated by R, and the delay time is indicated by τ, τ=R/c is obtained. By substituting τ=R/c into Equation (6), Equation (7) below is obtained. 
(½)×{(θτH1−θτL1)+(θτH2−θτL2)}=(ωB1+ωB2)×(R/c)  (7) From Equation (7), it is seen that the distance R between the apparatuses30and40can be calculated from the angular frequencies ωB1and ωB2and the result of adding a phase difference determined from the two frequency waves received by the apparatus30and a phase difference determined from the two frequency waves received by the apparatus40. Note that Equation (7) above is an example in a case where transmitting and receiving processes are simultaneously performed by the apparatuses30and40. However, frequency bands where simultaneous transmitting and receiving cannot be performed exist due to provisions of the Japanese Radio Law. Therefore, Patent Literature 2 discloses an example compatible with a case of time-series transmitting and receiving. FIG.3is an explanatory diagram showing an example of transmission signals of the apparatuses30and40in this case by arrows. In the sequence shown inFIG.3, Equation (8) below holds. Here, t0, D and T indicate delay times shown inFIG.3. θH1(t)+θH2(t+t0)+θH1(t+t0+D)+θH2(t+D)−{θL1(t+T)+θL2(t+t0+T)+θL1(t+t0+D+T)+θL2(t+D+T)}=2{(θτH1−θτL1)+(θτH2−θτL2)}=4×(ωB1+ωB2)τ1  (8) In other words, in the sequence ofFIG.3, the apparatus30transmits a transmission wave with the angular frequency ωC1+ωB1(hereinafter referred to as a transmission wave H1A) at a predetermined timing. Immediately after receiving the transmission wave H1A, the apparatus40transmits a transmission wave with the angular frequency ωC2+ωB2(hereinafter referred to as a transmission wave H2A). Furthermore, after transmitting the transmission wave H2A, the apparatus40transmits a transmission wave with the angular frequency ωC2+ωB2(hereinafter referred to as a transmission wave H2B) again. After receiving the transmission wave H2B for the second time, the apparatus30transmits a transmission wave with the angular frequency ωC1+ωB1(hereinafter referred to as a transmission wave H1B) again. Furthermore, the apparatus30transmits a transmission wave with the angular frequency ωC1−ωB1(hereinafter referred to as a transmission wave L1A). Immediately after receiving the transmission wave L1A, the apparatus40transmits a transmission wave with the angular frequency ωC2−ωB2(hereinafter referred to as a transmission wave L2A). Furthermore, after transmitting the transmission wave L2A, the apparatus40transmits a transmission wave with the angular frequency ωC2−ωB2(hereinafter referred to as a transmission wave L2B) again. After receiving the transmission wave L2B for the second time, the apparatus30transmits a transmission wave with the angular frequency ωC1−ωB1(hereinafter referred to as a transmission wave L1B) again. Thus, as shown inFIG.3, the apparatus40acquires a phase θH1(t) based on the transmission wave H1A during a predetermined time from predetermined reference time 0, acquires a phase θH1(t+t0+D) based on the transmission wave H1B during a predetermined time from time t0+D, acquires a phase θL1(t+T) based on the transmission wave L1A during a predetermined time from time T, and acquires a phase θL1(t+t0+D+T) based on the transmission wave L1B during a predetermined time from time t0+D+T. 
Further, the apparatus30acquires a phase θH2(t+t0) based on the transmission wave H2A during a predetermined time from time t0, acquires a phase θH2(t+D) based on the transmission wave H2B during a predetermined time from time D, acquires a phase θL2(t+t0+T) based on the transmission wave L2A during a predetermined time from time t0+T, and acquires a phase θL2(t+D+T) based on the transmission wave L2B during a predetermined time from time D+T. At least one of the apparatuses30and40transmits phase information, that is, the four determined phases, the two phase differences, or a result of the operation of Equation (8) above on the phase differences, to the other apparatus. A controlling portion of the apparatus30or40that receives the phase information calculates a distance by the operation of Equation (8) above. (Configuration) FIG.1shows an example of a specific configuration of the apparatus30(or40) ofFIG.2. A transmitting/receiving circuit20corresponds to the transmitting portion32or42and the receiving portion33or43ofFIG.2. InFIG.1, the digital portion31or41ofFIG.2is configured with a controlling portion11, a transmission data processing portion12, a ranging signal transmitting processing portion13, a receive data processing portion14, a ranging processing portion15and switches16and17. The controlling portion11controls each portion of the ranging apparatus ofFIG.1. The controlling portion11may be configured with a processor using a CPU (central processing unit), an FPGA (field programmable gate array) and the like, may operate in accordance with a program stored in a memory not shown to control each portion, or may realize a part or all of its functions by a hardware electronic circuit. The transmission data processing portion12and the receive data processing portion14are configured with a transmission data processing circuit and a receive data processing circuit for data communication, respectively, and the ranging signal transmitting processing portion13and the ranging processing portion15are configured with a ranging signal transmitting processing circuit and a ranging processing circuit for ranging, respectively. The transmitting/receiving circuit20is a circuit shared by data communication and ranging. An output of the transmission data processing portion12and an output of the ranging signal transmitting processing portion13are supplied to the transmitting/receiving circuit20via the switch16. The switch16is controlled by the controlling portion11to selectively provide the output of the transmission data processing portion12or the output of the ranging signal transmitting processing portion13to the transmitting/receiving circuit20. The transmission data processing portion12is controlled by the controlling portion11to generate transmission data and output the transmission data to the switch16. At the time of data communication, the switch16selects the output of the transmission data processing portion12and outputs the output to the transmitting/receiving circuit20. The transmitting/receiving circuit20performs a process for generating a transmission signal by FSK modulation and a process for FSK-demodulating a receive signal to generate a baseband signal. In other words, a data generator21of the transmitting/receiving circuit20is provided with transmission data via the switch16. The data generator21generates data for FSK modulation based on the transmission data and outputs the data to an oscillator22. The oscillator22causes an oscillation frequency to change according to the inputted data. 
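For illustration, the final distance computation implied by Equations (7) and (8) can be sketched in Python as follows. The function name and the synthesized example values are assumptions, and the summed phase difference is taken as already unwrapped; this is a sketch of the arithmetic, not the patent's ranging processing circuit.

import numpy as np

C = 299_792_458.0  # radio wave propagation speed c [m/s]

def distance_from_phase_difference(delta_theta, f_b1, f_b2):
    """Per Equation (8), the summed phase differences of the high and low
    tones satisfy delta_theta = 4*(wB1+wB2)*tau, with tau = R/c, so
    R = c * delta_theta / (4 * (wB1 + wB2))."""
    w_sum = 2.0 * np.pi * (f_b1 + f_b2)
    return C * delta_theta / (4.0 * w_sum)

# Round trip: a target at 12.5 m, with 200 kHz offset tones on both apparatuses.
f_b1 = f_b2 = 200e3
delta = 4.0 * 2.0 * np.pi * (f_b1 + f_b2) * (12.5 / C)  # synthesized phase sum
print(distance_from_phase_difference(delta, f_b1, f_b2))  # ~12.5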
In this way, the transmission data is FSK-modulated, and a transmission signal is obtained from the oscillator 22. Note that the oscillator 22 is capable of generating transmission signals with a plurality of frequencies corresponding to a plurality of channels. The controlling portion 11 is adapted to be capable of controlling the frequencies (the channels) of the transmission signals generated by the oscillator 22. An output of the oscillator 22 is provided to a power amplifier 23. The power amplifier 23 amplifies a transmission signal and outputs the transmission signal to an antenna 25 via a switch 24. The switch 24 is controlled by the controlling portion 11 to connect the power amplifier 23 and the antenna 25 at the time of transmitting, and to connect the antenna 25 and a receiving processing portion 26 at the time of receiving. Thus, at the time of transmitting, the antenna 25 transmits a transmission signal from the power amplifier 23. At the time of receiving, the antenna 25 receives a receive signal and provides the receive signal to the receiving processing portion 26 via the switch 24. The receiving processing portion 26 performs FSK demodulation processing on the receive signal and outputs a demodulated signal. The demodulated signal from the receiving processing portion 26 of the transmitting/receiving circuit 20 is supplied to the switch 17. The switch 17 is controlled by the controlling portion 11 to provide the output of the receiving processing portion 26 selectively to the receive data processing portion 14 or the ranging processing portion 15. At the time of data communication, the switch 17 outputs a receive signal from the receiving processing portion 26 to the receive data processing portion 14. The receive data processing portion 14 restores receive data from the inputted receive signal. In the present embodiment, the ranging signal transmitting processing portion 13 is controlled by the controlling portion 11 to generate a signal for outputting the ranging signals of the two frequency waves described above. In the present embodiment, in consideration of transmission by an FSK modulation scheme, for example, the ranging signal transmitting processing portion 13 continuously generates and outputs a high level ("H") corresponding to a logical value "1". Note that the continuation of "1" or "H" will be referred to as "continuous 1s" in the description below. At the time of ranging, the controlling portion 11 causes the switch 16 to select an output of the ranging signal transmitting processing portion 13 and causes an output of the receiving processing portion 26 to be supplied to the ranging processing portion 15 by the switch 17. The continuous 1s from the ranging signal transmitting processing portion 13 are provided to the data generator 21 via the switch 16. An operation of the transmitting/receiving circuit 20 at the time of ranging is similar to the operation at the time of data communication. When the continuous 1s are inputted, the data generator 21 causes an oscillation output with a frequency corresponding to the continuous 1s to be outputted from the oscillator 22. In other words, at the time of ranging, a transmission signal of the oscillator 22 is a CW, which is an unmodulated carrier. For example, when the continuous 1s are inputted to the transmitting/receiving circuit 20 in a case where the frequency deviation for the logical value "1" is set to 200 kHz, a CW with a frequency corresponding to the center frequency of a predetermined transmission channel plus 200 kHz is outputted from the oscillator 22.
Note that a transmission channel for a transmission signal from the oscillator 22 is set by the controlling portion 11. It is conceivable to cause a second wave corresponding to continuous 1s to be generated using the same method as that for causing the first wave corresponding to continuous 1s to be generated. For example, two CWs, each of which corresponds to continuous 1s, are caused to be generated using two transmission channels. In the case of ranging using two waves, the measurable distance is {light velocity c/(fH−fL)}×(1/2). In the case of causing two CWs to be generated using two channels, the measurable distance is therefore restricted by the channel spacing. For example, if the channel spacing between transmission channels is 3 MHz, a ranging result repeats at a distance of about 50 m, and, therefore, the measurable distance is about 50 m. Therefore, in the present embodiment, control is performed so that the ranging signals of two waves are caused to be generated in the same channel. In other words, the controlling portion 11 controls the ranging signal transmitting processing portion 13 to generate not only the continuous 1s but also a continuously generated and outputted low-level ("L") signal corresponding to a logical value "0". Note that the continuation of "0" or "L" will be referred to as "continuous 0s". When the continuous 0s are inputted, the data generator 21 causes an oscillation output with a frequency corresponding to the logical value "0" to be outputted from the oscillator 22. In other words, a transmission signal from the oscillator 22 in this case is also a CW, which is an unmodulated carrier. For example, when the continuous 0s are inputted to the transmitting/receiving circuit 20 in a case where the frequency deviation for the logical value "0" is set to −200 kHz, a CW with a frequency corresponding to the center frequency of a predetermined transmission channel minus 200 kHz is outputted from the oscillator 22. In the present embodiment, control is performed so that, for example, a CW generated in a predetermined channel corresponding to continuous 1s is used as the first wave of the two waves of ranging signals, and, for example, a CW generated in the same channel as the first wave corresponding to continuous 0s is used as the second wave. At the time of ranging, the controlling portion 11 causes the switch 16 to select an output of the ranging signal transmitting processing portion 13 and causes an output of the receiving processing portion 26 to be supplied to the ranging processing portion 15 by the switch 17. The continuous 1s or continuous 0s from the ranging signal transmitting processing portion 13 are provided to the data generator 21 via the switch 16. An operation of the transmitting/receiving circuit 20 at the time of ranging is similar to the operation at the time of data communication. The data generator 21 causes an oscillation output with a frequency corresponding to the continuous 1s to be outputted from the oscillator 22 when the continuous 1s are inputted, and causes an oscillation output with a frequency corresponding to the continuous 0s to be outputted from the oscillator 22 when the continuous 0s are inputted. In other words, a transmission signal of the oscillator 22 is a CW, which is an unmodulated carrier, at the time of ranging, and the difference between the transmission signal frequencies of the two waves corresponds to the amounts of frequency deviation set for the logical values "1" and "0".
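The dependence of the measurable distance on the spacing of the two CWs, noted above, can be checked numerically. The following minimal Python sketch simply evaluates the formula quoted above; the function name is an illustrative assumption.

C = 299792458.0  # light velocity in m/s

def measurable_distance(f_h, f_l):
    # Measurable (unambiguous) distance for two-wave ranging: {c/(fH - fL)} * (1/2).
    return C / (f_h - f_l) / 2.0

# Two CWs in adjoining channels spaced 3 MHz apart: the result repeats at ~50 m.
print(measurable_distance(3e6, 0.0))        # ~49.97 m
# Two CWs in the same channel at +/-200 kHz deviation (400 kHz apart): ~375 m.
print(measurable_distance(200e3, -200e3))   # ~374.74 m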
For example, when the continuous 1s are inputted to the transmitting/receiving circuit 20 in the case where the frequency deviation for the logical value "1" is set to 200 kHz, a CW with a frequency corresponding to the center frequency of a predetermined transmission channel plus 200 kHz is outputted from the oscillator 22. In the present embodiment, the CW in this case is used as the signal with the frequency fH of the two waves of the ranging signals described above. Further, in the present embodiment, a configuration is made in which, in the case where the frequency deviation for the logical value "0" is set to −200 kHz and the continuous 0s are inputted to the transmitting/receiving circuit 20, a CW with a frequency corresponding to the center frequency of the transmission channel of the frequency fH minus 200 kHz is outputted from the oscillator 22. In the present embodiment, the CW in this case is used as the signal with the frequency fL of the two waves of the ranging signals described above. Next, an operation of the embodiment configured as described above will be described with reference to FIGS. 4 and 5. FIG. 4 is an explanatory diagram for illustrating frequency components of a ranging signal; and FIG. 5 is a flowchart for illustrating the operation of the embodiment. In FIG. 4, the horizontal axis indicates frequency, transmission bands of N channels (ch) used for data communication are shown, N being a predetermined number, and up arrows indicate the center frequencies of the channels. In the present embodiment, data communication and ranging are performed using the N transmission channels shown in FIG. 4. Though FIG. 4 shows an example in which each transmission channel has a 3 MHz band (a channel spacing of 3 MHz), the channel spacing is not particularly limited. In the example of FIG. 4, the band of one predetermined channel is enlarged and shown at a lower part, and broken-line up arrows correspond to the center frequencies of the two adjoining channels. FIG. 4 shows an example in which the oscillator 22 is configured to generate such an oscillation output that the frequency deviation corresponding to data "1" is 200 kHz and the frequency deviation corresponding to data "0" is −200 kHz. The controlling portion 11 judges whether a ranging mode is set or a data communication mode is set, at step S1 of FIG. 5. For example, the controlling portion 11 may be adapted to set the ranging mode or the data communication mode according to a request from a host not shown. For example, the host may specify the ranging mode or the data communication mode according to a user operation. If judging that the ranging mode is not set, the controlling portion 11 performs a process corresponding to the data communication mode (step S2). In other words, the controlling portion 11 controls the transmission data processing portion 12 and the receive data processing portion 14 to perform data communication. The transmission data processing portion 12 generates transmission data. The transmission data is supplied to the data generator 21 of the transmitting/receiving circuit 20 via the switch 16. The data generator 21 generates data for FSK modulation based on the transmission data and causes the oscillation frequency of the oscillator 22 to change. Thereby, an FSK-modulated signal corresponding to the transmission data is generated by the oscillator 22. After being amplified by the power amplifier 23, the FSK-modulated signal (a transmission signal) from the oscillator 22 is supplied to the antenna 25 via the switch 24 and transmitted.
A receive signal induced in the antenna 25 is supplied to the receiving processing portion 26 via the switch 24. The receiving processing portion 26 FSK-demodulates the receive signal to obtain a demodulated signal. During the data communication mode, the demodulated signal is supplied to the receive data processing portion 14 via the switch 17. Receive data is restored from the inputted receive signal by the receive data processing portion 14. In this way, data transmitting/receiving is performed in the data communication mode. If judging that the ranging mode is set, the controlling portion 11 causes the process to transition from step S1 to step S3. For example, when desiring to determine a distance between a terminal including the ranging apparatus of FIG. 1 and another apparatus, a user specifies the ranging mode. When the ranging mode is specified, the controlling portion 11 judges whether a first wave transmitting timing has come or not, at step S3. If the judgment is NO, the controlling portion 11 judges whether a second wave transmitting timing has come or not, at step S6. Here, if the judgment is NO, the controlling portion 11 judges whether a receiving timing has come or not, at step S9. For example, the controlling portion 11 may execute the ranging mode by a predetermined packet in data communication to control the transmitting and receiving of ranging signals. If detecting that the first wave transmitting timing has come, at step S3, the controlling portion 11 causes the ranging signal transmitting processing portion 13 to generate continuous 1s (step S4). The continuous 1s from the ranging signal transmitting processing portion 13 are supplied to the data generator 21 via the switch 16. The data generator 21 causes an oscillation output corresponding to the continuous 1s, that is, a CW which is an unmodulated carrier with an oscillation frequency corresponding to the center frequency of a channel plus 200 kHz, to be generated by the oscillator 22 as a first wave output (step S5). For example, the data generator 21 causes a ranging signal CW1 with a frequency corresponding to the center frequency of the n-th channel (ch) of FIG. 4 plus 200 kHz to be generated by the oscillator 22 as a first wave. After being amplified by the power amplifier 23, the first wave is supplied to the antenna 25 via the switch 24 and transmitted. Next, if judging that the first wave transmitting timing has not come, at step S3, the controlling portion 11 judges whether the second wave transmitting timing has come or not, at step S6. If judging that the second wave transmitting timing has come, the controlling portion 11 performs transmission of the second wave of the ranging signal. In the present embodiment, the controlling portion 11 causes the ranging signal transmitting processing portion 13 to generate continuous 0s in order to cause the ranging signal of the second wave to be generated in the same channel as the first wave (step S7). The continuous 0s from the ranging signal transmitting processing portion 13 are supplied to the data generator 21 via the switch 16. The data generator 21 causes an oscillation output corresponding to the continuous 0s, that is, a CW which is an unmodulated carrier with an oscillation frequency corresponding to the center frequency of the channel including the first wave minus 200 kHz, to be generated by the oscillator 22 as a second wave output (step S5).
For example, if the first wave is the ranging signal CW1 of FIG. 4, the data generator 21 causes a ranging signal CW2 with a frequency corresponding to the center frequency of the n-th channel (ch) minus 200 kHz to be generated by the oscillator 22 as the second wave. After being amplified by the power amplifier 23, the second wave is supplied to the antenna 25 via the switch 24 and transmitted. In this way, the ranging signals of two waves in the same channel are outputted from the transmitting/receiving circuit 20. In the example of FIG. 4, the frequency spacing between CW1 and CW2, which are the ranging signals, is 400 kHz. Therefore, since a ranging result repeats at a distance of about 375 m in this case, the measurable distance can be extended to about 375 m. Note that if the ranging signals of two waves are assumed to be CW1 and CW3 in adjoining channels, the measurable distance is only about 50 m, as described above. If judging that the second wave transmitting timing has not come, at step S6, the controlling portion 11 judges whether the receiving timing has come or not, at step S9. If judging that the receiving timing has come, the controlling portion 11 controls the switch 24 to supply a receive signal induced in the antenna 25 to the receiving processing portion 26 and obtains a demodulated signal by FSK demodulation. The ranging processing portion 15 captures the demodulated signal via the switch 17 and detects a phase. The ranging processing portion 15 performs a ranging operation for determining the distance between its own apparatus and the other apparatus using a result of the phase detection. Note that, in the case of adopting the method of Patent Literature 2, it is necessary for the apparatus or the other apparatus to transmit a result of phase detection to the counterpart apparatus. The controlling portion 11 may transmit the phase information to the counterpart apparatus, for example, by data communication using the transmission data processing portion 12. Alternatively, the controlling portion 11 may receive the phase information from the counterpart apparatus by data communication. Thus, in the present embodiment, a configuration is possible in which a transmitting/receiving circuit is shared between a circuit portion for data communication adopting FSK modulation and demodulation and a circuit portion for ranging, and it is possible to suppress an increase in circuit scale. Further, in the present embodiment, a plurality of CWs within the band of one channel among the transmission channels used for data communication are used as ranging signals, so that ranging over a relatively long distance is possible. Further, in the present embodiment, CWs of two waves are caused to be generated in one transmission channel. In comparison with a case of causing only one CW to be generated in one transmission channel using only continuous 1s, the number of CWs that can be used for ranging signals can be increased, and it is possible to improve ranging accuracy. Note that though an apparatus including both a transmitting device and a receiving device for ranging and data communication is shown in FIG. 1, the transmitting device and the receiving device may be configured as separate bodies. A transmitting device for ranging can be configured with the controlling portion 11, the transmission data processing portion 12, the ranging signal transmitting processing portion 13, the switch 16, the data generator 21, the oscillator 22, the power amplifier 23 and the antenna 25 of FIG. 1.
Similarly, a receiving device for ranging can be configured with the controlling portion 11, the receive data processing portion 14, the ranging processing portion 15, the switch 17, the receiving processing portion 26 and the antenna 25 of FIG. 1. Further, not only the controlling portion 11 but also each of the transmission data processing portion 12, the ranging signal transmitting processing portion 13, the receive data processing portion 14 and the ranging processing portion 15 may be configured with a processor using a CPU, an FPGA and the like, may operate in accordance with a program stored in a memory not shown to control each portion, or may realize a part or all of its functions by a hardware electronic circuit. Though an example of causing ranging signals of two waves to be generated in one transmission channel has been described in the above embodiment, the ranging signals of two waves may be caused to be generated in different transmission channels. For example, CW3 in the (n−1)th channel and CW2 in the n-th channel of FIG. 4 may be the ranging signals of two waves. In this case, it is possible to extend the measurable distance to some extent. (Modification) FIG. 6 is an explanatory diagram for illustrating a modification. In FIG. 6, the horizontal axis and the vertical axis indicate distance and phase, respectively, and two ranging results are shown. Since it is not possible to detect a phase difference beyond 2π, repeating occurs in a ranging result, and a plurality of distance candidates exist for a calculated detected phase difference. In the above embodiment, CWs (ranging signals) of two waves in the same channel are caused to be generated, and it is possible to lengthen the repeating distance. However, it is thought that the ranging accuracy of a ranging result in the case of using CWs of two waves in the same channel is relatively low. Therefore, in the present modification, the CWs of two waves in the same channel are used only for correction of repeating, and a ranging result is obtained using another set of CWs. In FIG. 6, a ranging result by a set of CWs of two waves (hereinafter referred to as a CW set for ranging) other than the set of CWs of two waves in the same channel (hereinafter referred to as a CW set for repeating correction) is shown by a solid line. For the CW set for ranging, the transmission channels are selected so that the frequency difference between the two CWs is relatively large. Therefore, in ranging using the CW set for ranging, the ranging accuracy is relatively high though the repeating distance is relatively short. FIG. 6 shows the relationship between a distance R and θdet when the left side of Equation (7) described above is denoted θdet. The solid line in FIG. 6 shows an example of the case of using the CW set for ranging, and the broken line shows an example of the case of using the CW set for repeating correction. Note that though the sum θdet of detected phase differences calculated by Equation (7) above can take values other than values between −π (rad) and π (rad), the sum θdet of detected phase differences shown in FIG. 6 has been converted to be between −π (rad) and π (rad). This is because a phase angle is generally indicated within the range of [−π (rad), π (rad)]. As shown by the solid line in FIG. 6, since the distance change relative to a change in the sum θdet of detected phase differences is small when the CW set for ranging is used, it is seen that high ranging accuracy can be obtained.
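The repeating correction that the next paragraph describes with the candidates R1 to R3 can be sketched in a few lines of Python; the candidate values, the coarse estimate and the helper name below are illustrative assumptions, not figures from the patent.

def correct_repeating(fine_candidates, coarse_distance):
    # Select the fine-ranging candidate closest to the coarse distance obtained
    # from the CW set for repeating correction (whose repeating distance is long).
    return min(fine_candidates, key=lambda r: abs(r - coarse_distance))

candidates = [12.0, 62.0, 112.0]  # e.g. a fine CW set whose result repeats every 50 m
coarse = 58.0                     # e.g. from the same-channel 400 kHz set (~375 m unambiguous)
print(correct_repeating(candidates, coarse))  # 62.0, i.e. the middle candidate is selected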
If a sum θdet0 of detected phase differences is obtained in the case of using the CW set for ranging, R1, R2 and R3 exist as candidates for the distance of a ranging result, as shown in FIG. 6. The relationship between a sum of detected phase differences obtained using the CW set for repeating correction and distance is shown by the broken line in FIG. 6. The broken line in FIG. 6 shows that its repeating distance is relatively long. In order to select the correct distance as the ranging result from among R1, R2 and R3, the distance closest to the distance obtained from the sum of the detected phase differences obtained using the CW set for repeating correction may be selected from among those distances. For example, if θdet1 is detected using the CW set for repeating correction, it can be judged that the distance R2 obtained using the CW set for ranging is the correct ranging result. Thus, the set of CWs of two waves in the same channel is used for repeating correction of a ranging result. Note that though an example in which only one set is used as the CW set for ranging is shown in FIG. 6, a plurality of sets may be used. Further, as the CW set for ranging, a set of a CW of a predetermined channel corresponding to continuous 1s and a CW of another channel corresponding to continuous 0s may be adopted, or a set of CWs of mutually different channels, with both CWs corresponding to continuous 1s or both corresponding to continuous 0s, may be adopted. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel devices and methods described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
36,198
11860265
All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary to elucidate example embodiments, wherein other parts may be omitted or merely suggested. DETAILED DESCRIPTION Example embodiments will now be described more fully hereinafter with reference to the accompanying drawings. That which is encompassed by the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example. Furthermore, like numbers refer to the same or similar elements or components throughout. Generally, a radar (radio detection and ranging) uses electromagnetic waves to determine spatial information of objects, like range, angle, and velocity. These systems have been used in the past mainly for space and defense applications. Due to the technological progress of recent years, radars are going to be used in the future in other applications like building automation, drones, HVAC (heating, ventilation, and air conditioning), smart home accessories such as smart speakers, etc. In these applications, the radars are used for continuous occupancy awareness, distance measurement, people tracking, and behavioral classification. Radars have several advantages when compared to traditional sensors like cameras, ultrasonic sensors, and passive infrared sensors (PIR). In contrast to cameras, they better preserve privacy, and they are robust against varying ambient lighting conditions. A radar can also detect smaller movements over a longer distance than a standard PIR sensor. Therefore, the disadvantage of PIR can be overcome: based on people's micro motions (e.g., vital signs such as breathing and heart rate), a radar can detect one or multiple people sitting still in a room. Vital sign detection can be used for the health monitoring of drivers and elderly people. By detecting tiny motions, emergencies like apnea, irregular heart rate, or sudden infant death can be identified. Radars can be used in a contactless manner; thus, no interaction with the people is needed. For tracking, detection, and counting of people, a radar with spatial (e.g., range and angle) and speed resolution is needed. In general, an angular resolution is achieved by beam steering of the emitted or received electromagnetic signal. Therefore, different principles are known and used. One principle is the MIMO (multiple-input multiple-output) radar, where multiple transmitters are emitting signals via multiple antennas. These are reflected by an object and received by multiple antennas and their respective receivers. Algorithms do the beam steering in the digital post-processing. Another principle is the phased array antenna, which also uses multiple receive and transmit antennas. A complex radio frequency front-end, mainly based on phase shifters, is used. In this case, the beam steering is done electrically by phase-shifting the feeding lines of the antennas. The previously mentioned principles are costly and power-consuming because multiple front ends and multiple antennas are required. In cost- and power-sensitive applications, radars that use SISO (single-input single-output) become more interesting because the required number of components is minimized. The usage of just one transmitter and one receiver reduces the power consumption of the analog front-end and the required chip size.
Therefore, this principle creates the opportunity to deploy radars in high numbers, long-term battery powered, for big-scale applications. The potential disadvantage of the SISO radar is its limited ability for direction-finding. Most antennas have a fixed radiation pattern; thus, no spatial diversity can be provided. An alternative is physically rotating the antenna such that the main beam illuminates different angular sectors. Algorithms for super-resolution direction finding for such systems are available, but moving mechanical components are unappreciated due to their mechanical complexity and scanning speed. In order to avoid these drawbacks, the disclosure employs a frequency scanning antenna (FSA), which can steer the beam direction electrically by varying the excitation frequency. In this context, it is noted that two antennas can be used, with a diplexer, whereas a reduction to one antenna is still possible and described within the scope of the disclosure. With the aid of the disclosure, one can detect and track moving objects. For this reason, joint estimation algorithms for extracting range, angle, and Doppler (velocity) simultaneously are used. The disclosure addresses and can mitigate three difficulties, which are described in the following. The first difficulty is the development of a signal processing framework for joint angle and range-velocity estimation. Radars have existed for almost one century. Therefore, a systematic way of implementing radar algorithms has been established. Typically, radar raw data are organized in multidimensional matrixes called a radar cube or a radar data cube. For this data format a wide variety of algorithms are already available. Accordingly, generating such a radar cube makes the teachings of the disclosure particularly useful because any further processing can be done with existing algorithms. Thus, the second difficulty to overcome is the creation of a radar cube or a radar data cube in order to provide a generic interface for higher-level applications. Furthermore, the third difficulty is to increase the angular and range resolution of a SISO radar with an FSA, for example achieved by the algorithm of the signal processing chain. This can be implemented with a SISO radar with an FSA and can be achieved through the usage of overlapping angular windows in the signal processing chain. Additionally or alternatively, another difficulty is to reduce the computational effort of the angular estimation through finding the (e.g., optimum) tradeoff between angular and range resolution of a SISO radar with an FSA, for example achieved with respect to the corresponding signal processing chain. It is noted that adjustments may be useful for a signal processing chain implementation into dedicated hardware blocks of an SoC (system on chip), for instance, to unload a microprocessor and to accelerate the data execution. Now, with respect to the figures, the working principle of a SISO radar with an FSA 10 is depicted in FIG. 1. The beam rotates from the starting angle θs to the end angle θe, illuminating a target 11 such as a person. The complete angular coverage area is divided into n angular sectors S0-Sn. A sawtooth FMCW (frequency modulated continuous wave) chirp with normalized power s(t) can be expressed as:

s(t) = cos(2π fc t + 2π (μ/2) t²),  0 < t < Tc,  (1)

wherein fc denotes the starting carrier frequency, wherein μ denotes the frequency slope, and wherein Tc denotes the chirp duration.
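Equation (1) can be exercised directly. The Python sketch below synthesizes one sawtooth FMCW chirp; the parameter values are assumptions chosen for illustration and are not specified by the disclosure.

import numpy as np

fc = 24.0e9          # starting carrier frequency in Hz (illustrative assumption)
bw = 250.0e6         # swept bandwidth in Hz (illustrative assumption)
Tc = 20.0e-6         # chirp duration in s (illustrative assumption)
mu = bw / Tc         # frequency slope in Hz/s

fs = 2.5 * (fc + bw)                  # simulation sampling rate above Nyquist
t = np.arange(0.0, Tc, 1.0 / fs)
s = np.cos(2 * np.pi * fc * t + np.pi * mu * t**2)   # Eq. (1): 2*pi*(mu/2)*t^2 = pi*mu*t^2

# The instantaneous frequency fc + mu*t sweeps linearly across the band.
print(fc / 1e9, (fc + mu * Tc) / 1e9, t.size)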
The antenna radiation pattern of the FSA is modeled as a frequency-dependent function A(θ, f) and shown in FIG. 2. Frequency f is a linear function of time t:

f(t) = fc + μt.  (2)

Thus, the radiation pattern effectively is a function of time, as A(θ, t). To establish the following equations, it is assumed that the targets are point targets, and there is no background reflection. Starting from a single target case, wherein the target is d meters away, moving at v m/s along the radial direction and located at θt degrees, the received signal is:

r(t) = αL A²(θt, t) cos(2π fc (t−τ) + π μ (t−τ)² + 2π fd t).  (3)

In this context, αL denotes the loss term including gains in the processing chain and path loss, τ = 2d/c denotes the time delay due to the round trip, fd = 2v/λ denotes the Doppler frequency, and λ denotes the wavelength. It is noted that the radiation pattern appears squared because the same antenna is used for transmission and reception. As a result, the amplitude of the received signal is maximized when the direction of the main beam at f = fc + μt is at the angle θt. During the other periods in the chirp, the target is illuminated by side lobes, thus yielding low amplitude in the receiver. As a result, as the main beam of the FSA scans different angles, the variation of the respective received amplitude during a chirp may be seen. The scanned angle that corresponds to the maximum in the received amplitude may be the target's angle of arrival (AoA). The normalized chirp is shown in FIG. 3A. It is noted that the transmitted signal does not have a uniform amplitude, because the antenna gain varies over frequency (see FIG. 2). But in practice, the amplitude variation in indoor environments may not be visible because background reflections are mainly dominant. The final received signal would be a superposition of multiple similar waveforms with different angles of arrival, phase, and magnitude. The objects in the background also give strong reflection back to the receiver. As a result, the raw amplitude of received chirps can generally not provide angular information. However, velocity information should allow for separating moving targets and the static environment. The portions of a chirp in the vicinity of the angles at which targets are located comprise the targets' range and velocity information, while the other portions typically do not. Firstly, these portions of the chirp that comprise the targets' information are extracted by applying window functions. These window functions are referred to in the following as angular windows; they also suppress the frequency leakage of the FFT (Fast Fourier Transform). It is noted that the choice (e.g., type or length) of the angular window is not limited to what is described below. As already illustrated by FIG. 1, the complete coverage area is divided into several angular sectors S0 . . . Sn, wherein each Sn is associated with an angular window Wn. As a starting point, the following definition of Wn is used; later in this document, variations thereof may be introduced. The n-th angular window function Wn is thus defined by the radiation pattern of the respective FSA in the azimuth plane:

Wn(θ) = A²(θ, tn),  θs < θ < θe,  (4)

wherein θs and θe are the starting and ending angles scanned by the FSA. At tn, which corresponds to the instantaneous carrier frequency fn, the antenna pattern is steered at an angle θn. The window is scaled proportionally to fit between θs and θe. For the simulation results in FIG. 3A and FIG. 3B a point target is considered, and only in FIG. 3A is the window centered at the AoA of the respective target.
After windowing, the portion that comprises the target information is preserved. If the angular window Wi of the corresponding angular sector Si is applied to a target which is in another angular sector Sj, the target will be suppressed, as depicted in FIG. 3B. The range-Doppler processing of the two cases will exhibit different magnitudes. The angular window that matches the actual AoA will yield the strongest range-Doppler magnitude. Furthermore, FIG. 4A and FIG. 4B show the respective range-Doppler profiles after applying the angular windows of FIG. 3A and FIG. 3B, respectively. Applying a selection of N angular windows that are centered at discrete different angles gives N copies of the time domain signal with different portions of a chirp extracted. The height of the range-Doppler peak corresponding to the target varies according to the alignment between the angular window center angle and the target's AoA. If the respective vector is extracted from the radar cube that corresponds to the target's range-Doppler bin, a variation as a function of angle may be seen. This variation is called the angle profile. The angle profile of the example shown above is illustrated by FIG. 5, wherein the windows are separated by about 2.5 deg from each other. A clear peak is observed at 30 deg, which is exactly the target's AoA. As already discussed above, there are three major difficulties. One of these difficulties comprises establishing a signal processing chain of the SISO radar with an FSA. In accordance with FIG. 6, a signal processing chain for generating the radar cube is depicted. At the top of FIG. 6, there are the raw data arranged in a two-dimensional matrix 60, which represents a coherent processing interval. This slice 60 is copied n times into the three-dimensional raw data cube 61. After this step, the angular windows Wn(θ), the range (first) FFTs, the frequency leakage windows WL(n) and the Doppler (second) FFTs are applied to every slice. Generally, as an alternative thereto, it might be useful if, with respect to the raw data slice, at least one of a window function, for example an angular window function, a first frequency transform, for example a frequency transform regarding range, a leakage window function, for example a frequency leakage window function, a second frequency transform, for example a frequency transform regarding Doppler, or any combination thereof is applied. Another of the above-mentioned three major difficulties comprises creating the radar cube 62 or radar data cube. Especially due to the generation of the raw data cube 61 and the application of the above-mentioned steps to every slice, the result is by its own nature a radar cube such as the radar cube 62 with range, Doppler, and angular sector dimensions. The difference is that for a MIMO radar, the third dimension may represent antenna elements that are separated in space. It is further noted that the radar cube generation described above can be enhanced for implementation in the digital hardware of an SoC. In this context, the first step, copying the raw data slice and creating a raw data cube, can be skipped; the radar cube can be directly calculated based on the raw data slice. For the sake of completeness, it is noted that the raw data cube could require more memory. Thus, more physical space on the silicon of an SoC and more energy may be needed. Furthermore, the data copying process may require more energy. In the following, a method is described for deriving angular windows from the radiation pattern of the FSA.
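Before turning to that method, the chain of FIG. 6 can be summarized in compact form. The following Python sketch is schematic: the dimensions, window shapes and variable names are assumptions (plain Hann windows stand in for windows derived from the FSA radiation pattern), and the radar cube is computed directly from the raw data slice, skipping the copied raw data cube as suggested above.

import numpy as np

n_chirps, n_samples, n_sectors = 64, 1024, 16      # illustrative dimensions

raw_slice = np.random.randn(n_chirps, n_samples)   # one coherent processing interval

# One angular window per sector; overlapping Hann windows across fast time.
width = n_samples // (n_sectors // 2)
angular_windows = np.zeros((n_sectors, n_samples))
for n in range(n_sectors):
    start = n * (n_samples - width) // (n_sectors - 1)
    angular_windows[n, start:start + width] = np.hanning(width)

leakage_window = np.hanning(n_chirps)   # frequency leakage window for the Doppler FFT

cube = np.empty((n_sectors, n_chirps, n_samples // 2), dtype=complex)
for n in range(n_sectors):
    windowed = raw_slice * angular_windows[n]                      # angular window Wn
    rng = np.fft.rfft(windowed, axis=1)[:, :n_samples // 2]        # range (first) FFT
    cube[n] = np.fft.fft(rng * leakage_window[:, None], axis=0)    # Doppler (second) FFT

print(cube.shape)   # (angular sector, Doppler, range)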
In this context, it is noted that there may be a tradeoff between angular resolution and range resolution. The general cause of this difficulty is that narrow angular windows result in a lower effective bandwidth BW of the FMCW chirp being analyzed by the signal processing chain. The target is therefore virtually illuminated by less bandwidth, and the range resolution dres increases, i.e., becomes coarser. With the speed of light c = 299792458 m/s, the range resolution can be calculated as:

dres = c/(2·BW).  (5)

For increasing the angular and range resolution at the same time in order to address the above-mentioned difficulty, overlapping windows should be used instead of windows that divide the coverage area into exactly separated sectors. The angular windows then overlap, as illustrated by FIG. 7. The method, which has multiple implementation benefits, comprises at least one of the following steps. The method may comprise the step of defining the required number n of angular sectors Sn (see FIG. 1). In this context, it is noted that it might be beneficial if the minimum requirement for the number n of angular sectors is:

n = θFSA/θHPBW,  (6)

wherein θFSA denotes the respective angular coverage range of the FSA (see FIG. 2; −60 deg to 60 deg), and wherein θHPBW denotes the beam width of the FSA at θ = 0 deg (see FIG. 2; similar to the middle antenna patterns). In this context, it is noted that θHPBW may be the 3 dB, 5 dB, or another beam width. In addition to this, a requirement for n to avoid degradation in angular resolution may be:

n ≥ 2·θFSA/θHPBW.  (7)

It is further noted that it might be beneficial if the number of angular sectors is equal to the number of angular windows. Furthermore, the method may comprise the step of retrieving the FSA radiation pattern A(θm, tn) based on the mid-angles θm of each Sn. Moreover, the method may comprise the step of creating a new angular window function based on standard window functions (for instance, a Hann window, a Hamming window, etc., or a combination thereof) and aligning it with the FSA radiation pattern A(θm, tn) in position (θm, tn), beam width θHPBW and height (Â(θm, tn)). Generally, as an alternative, it is noted that it might be beneficial if the alignment is done on the basis of at least one of the foregoing list or any combination thereof. Furthermore, examples of angular windows are depicted in FIG. 7. These are derived from the antenna pattern of FIG. 2. In this context, a 3 dB beam width has been chosen for the alignment. For the first step of the above-mentioned method, the following tradeoff may be important when the number n of angular windows Wn or sectors Sn is defined: the more angular windows are defined, the higher the computational effort for the signal processing chain will be. In the second step of the above-mentioned method, the radiation patterns A(θm, tn) of the FSA should be generated. Based on the number n of the angular sectors Sn and the complete angular coverage area (θs to θe, see FIG. 1), the mid-angles θm of each Sn are defined. With θm, the radiation patterns A(θm, tn) of the FSA are generated via an antenna simulation. In the third step, as angular windows Wn(θ), window functions like Hann, Hamming, Blackman-Harris, etc., or any combination thereof may be used. This may have the benefit of providing full control over the frequency leakage instead of just using the antenna pattern A(θm, tn). Therefore, the angular windows should be aligned in position, Wn(θm) = Â(θm, tn), and in beam width, Wn(θHPBW) = A(θHPBW, tn). A numerical sketch of these steps follows below.
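As a worked check of Equations (6) and (7) and of the window construction, the Python sketch below uses the coverage range quoted from FIG. 2 together with an assumed beam width; the 15 deg value and the Hann stand-in windows are illustrative assumptions, not figures from the disclosure. The alignment discussion continues after the sketch.

import math
import numpy as np

theta_fsa = 120.0    # angular coverage of the FSA in deg (-60 deg to 60 deg, see FIG. 2)
theta_hpbw = 15.0    # assumed 3 dB beam width at 0 deg, in deg (illustrative value)

n_min = math.ceil(theta_fsa / theta_hpbw)        # Eq. (6): minimum number of sectors
n = math.ceil(2 * theta_fsa / theta_hpbw)        # Eq. (7): avoids angular-resolution loss

# Hann windows aligned in position with the mid-angles; the total width is an
# illustrative choice, and a real implementation would also match the height
# and 3 dB width of the pattern A^2(theta_m, t_n).
mid_angles = np.linspace(-60 + theta_fsa / (2 * n), 60 - theta_fsa / (2 * n), n)
theta = np.linspace(-60, 60, 481)                # angle axis in deg
windows = np.zeros((n, theta.size))
for i, th_m in enumerate(mid_angles):
    mask = np.abs(theta - th_m) <= theta_hpbw    # overlapping supports
    windows[i, mask] = np.hanning(mask.sum())

print(n_min, n, windows.shape)   # 8 and 16 sectors for the assumed beam width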
The alignment at the 3 dB, 5 dB, or another beam width determines the width of the resulting angular window and thus the amount of overlap between the angular windows. A corresponding example is illustrated by FIG. 8, showing one angular window 51 overlaid with the appropriate antenna pattern 52. In this context, the antenna pattern yields an angular window at the 45 deg beam direction. Therefore, the 3 dB beam width is chosen. In the following, a signal processing method for implementation in an SoC is described. In this context, it is noted that the approach with angular windows has multiple potential benefits. In general, a lot of unused data can be discarded. This reduces the total number of mathematical operations, which saves production costs through the implementation of fewer digital calculation and memory blocks. Furthermore, fewer mathematical operations consume less energy. In the present example, the complete angular range (−60 deg ≤ θ ≤ 60 deg) is internally sampled in the radar by an ADC (analog-to-digital converter) and converted into a 1024-element-long data vector. After this raw data is multiplied with an angular window (see FIG. 8), about 75% of the resulting data is filled with zeros. This result can be treated as a zero-padded signal. The number of frequency bins increases but not the range resolution. Therefore, the zeros can be removed, and the number of mathematical operations can be reduced. The SISO radar with FSA and the introduced signal processing method can create a low cost and low power principle, which provides range, angle, and Doppler information. The SISO radar, as well as the FSA, has low complexity and thus fewer components in comparison to MIMO and phased array antenna principles. It is further noted that the disclosure may be used for or in the context of smart home, smart building, health, HVAC (heating, ventilation, and air conditioning), drone applications, or any combination thereof. With respect to smart home applications, it is noted that the disclosure may be used in a smart assistant for people classification. With respect to smart building applications, a control based on occupancy detection may exemplarily be realized with the aid of the disclosure. In addition to this or as an alternative, a dynamic employee guidance for flexible workspaces can be realized. Health applications, driver monitoring, and/or elderly people surveillance can be realized with the aid of the disclosure. With respect to HVAC applications, the disclosure can be used for smart demand control, for instance, based on occupancy detection of meeting rooms or the like. With respect to drone applications, a remote navigation for guided landing can be realized with the aid of the disclosure. Now, with respect to FIG. 9, a flow chart of an embodiment of the method for extracting spatial resolution and/or velocity resolution of a SISO radar acquiring raw radar data with a frequency scanning antenna is shown. In a first step 100, a radar beam is steered with the aid of the frequency scanning antenna with respect to an area to be illuminated by the radar. Then, in a second step 101, the area is divided into at least two angular sectors. In this context, the at least two angular sectors are configured in a manner that the at least two angular sectors overlap each other. It might be beneficial if, for steering the radar beam with the aid of the frequency scanning antenna, the method comprises the step of varying the respective excitation frequency.
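Returning briefly to the zero-stripping optimization described above, the following Python sketch shows the idea; the roughly 75% share of zeros follows the example in the text, while the helper name and window placement are assumptions.

import numpy as np

def trimmed_range_fft(raw, window):
    # Multiply by the angular window, then keep only the window's support before
    # the range FFT; the discarded zeros would only have acted as zero-padding,
    # adding frequency bins without improving the range resolution.
    windowed = raw * window
    support = np.nonzero(window)[0]
    return np.fft.rfft(windowed[support[0]:support[-1] + 1])

raw = np.random.randn(1024)                 # 1024-element raw data vector
window = np.zeros(1024)
window[384:640] = np.hanning(258)[1:-1]     # ~25% support, i.e. ~75% zeros discarded
print(trimmed_range_fft(raw, window).size)  # 129 bins instead of 513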
It is further noted that the method may comprise the step of generating a raw data slice comprising the raw radar data on the basis of the at least two angular sectors. It is noted that the raw data slice may comprise or be a two-dimensional matrix. In addition to this or as an alternative, the raw data slice may represent a respective coherent processing interval. Further additionally or further alternatively, the raw data slice may comprise or be a slow-time and fast-time matrix. Furthermore, the method may comprise the step of generating a raw data cube on the basis of the raw data slice by copying the raw data slice at least once and stacking the raw data slice and the at least one copy thereof together in order to form the raw data cube. Moreover, it might be beneficial if the method comprises the step of generating a radar data cube on the basis of the raw data slice or the raw data cube by applying at least one window function and/or at least one frequency transform, for example at least one Fast Fourier Transform, to the raw data slice or the raw data cube. Additionally or alternatively, the method may comprise the step of applying at least one angular window function to at least one, for example each, of the at least two angular sectors such that the corresponding angular window function matching the respective angle of arrival of a corresponding radar target yields the strongest respective magnitude in the raw radar data. In this context, it might be beneficial if the method comprises the step of deriving the at least one angular window function from the corresponding radiation pattern of the frequency scanning antenna. Finally, FIG. 10 shows a block diagram of a device 12 for extracting spatial resolution and/or velocity resolution of a SISO radar acquiring raw radar data with a frequency scanning antenna 10. According to FIG. 10, the device 12 comprises an interface 13 being connectable to the SISO radar with the frequency scanning antenna 10, and a control unit 14 connected to the interface 13. In this context, the control unit 14 is configured to steer a radar beam with the aid of the frequency scanning antenna with respect to an area to be illuminated by the radar. Furthermore, the control unit 14 is configured to divide the area into at least two angular sectors. In addition to this, the at least two angular sectors are configured in a manner that the at least two angular sectors overlap with respect to each other. It is noted that it might be beneficial if, for steering the radar beam with the aid of the frequency scanning antenna, the control unit 14 is configured to vary the respective excitation frequency. It is further noted that the control unit 14 may be configured to generate a raw data slice comprising the raw radar data on the basis of the at least two angular sectors. With respect to the raw data slice, it is noted that the raw data slice may comprise or be a two-dimensional matrix. Additionally or alternatively, the raw data slice may represent a respective coherent processing interval. In further addition to this or as a further alternative, the raw data slice may comprise or be a slow-time and fast-time matrix. Furthermore, the control unit 14 may additionally or alternatively be configured to generate a raw data cube on the basis of the raw data slice by copying the raw data slice at least once and stacking the raw data slice and the at least one copy thereof together in order to form the raw data cube.
Moreover, it might be beneficial if the control unit 14 is configured to generate a radar data cube on the basis of the raw data slice or the raw data cube by applying at least one window function and/or at least one frequency transform, for example at least one Fast Fourier Transform, to the raw data slice or the raw data cube. In addition to this or as an alternative, the control unit 14 may be configured to apply at least one angular window function to at least one, for example each, of the at least two angular sectors such that the corresponding angular window function matching the respective angle of arrival of a corresponding radar target yields the strongest respective magnitude in the raw radar data. In this context, it might be beneficial if the control unit 14 is configured to derive the at least one angular window function from the corresponding radiation pattern of the frequency scanning antenna. While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit or scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above described embodiments. Although the disclosure has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. While some embodiments have been illustrated and described in detail in the appended drawings and the foregoing description, such illustration and description are to be considered illustrative and not restrictive. Other variations to the disclosed embodiments can be understood and effected in practicing the claims, from a study of the drawings, the disclosure, and the appended claims. The mere fact that certain measures or features are recited in mutually different dependent claims does not indicate that a combination of these measures or features cannot be used. Any reference signs in the claims should not be construed as limiting the scope.
27,105
11860266
DETAILED DESCRIPTION FIG. 1 is a block diagram of a radar detection system 100 according to an embodiment of the invention. The radar detection system 100 may detect a sign of a life, and may be a frequency-modulated continuous wave (FMCW) radar. The life may be a human being, a pet or others. The sign of life may be breath, heartbeats or others. Since breaths or heartbeats result from expansion and contraction of a heart, a chest or other body parts at a predetermined frequency, the radar detection system 100 may determine a movement feature of the target object 140, so as to determine whether the movement feature matches a pattern of the sign of life, thereby determining whether the target object 140 is a life. The radar detection system 100 may transmit a transmission signal St and receive an echo signal Se reflected from the target object 140, and determine whether the target object 140 is a life according to the echo signal Se. The radar detection system 100 may include antennas 110, 120, a transmitter 112, a signal generator 114, a receiver 122, an analog-to-digital converter (ADC) 124, a preprocessing circuit 126 and a processor 130. The antenna 110, the transmitter 112, the signal generator 114 and the processor 130 may be sequentially coupled to each other. The antenna 120, the receiver 122, the ADC 124, the preprocessing circuit 126 and the processor 130 may be sequentially coupled to each other. The processor 130 may generate a baseband signal for a frequency-modulated continuous wave signal by controlling the signal generator 114 via a control signal Sct. The transmitter 112 may convert the frequency-modulated continuous wave signal into a transmission signal St in a predetermined frequency band (e.g., 6 GHz), and then the antenna 110 may transmit the transmission signal St. The frequency-modulated continuous wave may be a triangular wave, a saw-toothed wave, a staircase wave, a sinusoidal wave or another shape of wave. The receiver 122 may receive the echo signal Se via the antenna 120, and mix the echo signal Se with a signal associated with the transmission signal St, e.g., the transmission signal St itself, to generate a beat signal. The beat signal carries beat information indicative of a half of a difference between the frequency of the echo signal Se and the frequency of the transmission signal St. The echo signal Se may include an in-phase component and a quadrature component at each point in time, and the beat signal may include a corresponding in-phase component I and a corresponding quadrature component Q at each point in time. The receiver 122 may mix the beat signal with two orthogonal oscillating signals to obtain the in-phase component I and the quadrature component Q of the beat signal. The ADC 124 may set a predetermined sampling frequency, e.g., 44 kHz, as the sampling frequency, and sample the in-phase component I and the quadrature component Q of the beat signal to generate a digitized in-phase component and a digitized quadrature component. The preprocessing circuit 126 may perform a preprocessing procedure on the digitized in-phase component and the digitized quadrature component to generate a preprocessed in-phase component I′ and a preprocessed quadrature component Q′. The preprocessing procedure may include filtering out a high frequency noise, reducing a sampling frequency, removing a direct current component, and a combination thereof. The preprocessing circuit 126 may include a low-pass filter, a decimator, an average circuit, an adder and a combination thereof.
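A condensed Python sketch of this preprocessing chain is given below, anticipating the component-by-component description in the next paragraph; the decimation factor, rates and moving-average length follow the examples given there, while the overall structure, names and test signal are illustrative assumptions.

import numpy as np
from scipy.signal import decimate

def preprocess(component, factor=80, avg_len=128):
    # Low-pass filter and downsample (44 kHz / 80 = 550 samples per second), then
    # remove the direct current component with a moving average, as the adder does.
    down = decimate(component, factor, ftype="fir")
    dc = np.convolve(down, np.ones(avg_len) / avg_len, mode="same")
    return down - dc

fs = 44000
t = np.arange(fs) / fs
i_raw = 0.3 + np.cos(2 * np.pi * 1.2 * t)   # synthetic I with a DC offset
q_raw = 0.3 + np.sin(2 * np.pi * 1.2 * t)   # synthetic Q with a DC offset
i_pp, q_pp = preprocess(i_raw), preprocess(q_raw)
print(len(i_pp), round(float(np.mean(i_pp)), 3))   # 550 samples/s, DC largely removed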
The low-pass filter may remove high frequency components from the digitized in-phase component and the digitized quadrature component to generate a filtered in-phase component and a filtered quadrature component. The decimator may reduce the quantity of data, e.g., reduce the filtered in-phase components and the filtered quadrature components at 44 k samples per second by a factor of 80 to generate downsampled in-phase components and quadrature components at 550 samples per second. The downsampled data may reduce computations of subsequent signal processing, preventing signal distortion and false detection of a life owing to the filter being unable to process a large quantity of data in the subsequent filtering process. The direct current components in the downsampled in-phase components and the downsampled quadrature components may be obtained by averaging the downsampled in-phase components and the downsampled quadrature components over a period of time, respectively. The average circuit may compute the averages of the downsampled in-phase components and downsampled quadrature components, e.g., compute 128-data moving averages to generate the average of the downsampled in-phase components and downsampled quadrature components. The adder may remove the average of the downsampled in-phase components from the downsampled in-phase component to generate the preprocessed in-phase component I′, and remove the average of the downsampled quadrature components from the downsampled quadrature component to generate the preprocessed quadrature component Q′, thereby simplifying the subsequent complex signal demodulation process and preventing the subsequent complex signal demodulation process from being affected by the direct current offset. In some embodiments, the preprocessing procedure may be implemented by software or a combination of software and hardware. In the software implementation, the processor 130 may store the software in a memory of the radar detection system 100 and load the software from the memory to execute the preprocessing process. The processor 130 may detect the sign of life according to the preprocessed in-phase component I′ and the preprocessed quadrature component Q′, and generate an output signal So to indicate whether a life is detected. FIG. 2 is a block diagram of the processor 130. The processor 130 may include a complex signal demodulation (CSD) unit 131, a window function unit 132, a first time-domain-to-frequency-domain transform unit 133, a combining unit 134, a second time-domain-to-frequency-domain transform unit 135 and a sign-of-life detection unit 136, the units being sequentially coupled to each other. Each of the units may be implemented by software, hardware or a combination thereof. The complex signal demodulation unit 131 may construct complex conjugate data v, v* according to the in-phase component I′ and the quadrature component Q′, e.g., v = I′+jQ′, v* = I′−jQ′. In some embodiments, v = Q′+jI′, v* = Q′−jI′. The window function unit 132 may employ a window function to divide the complex conjugate data v, v* using a fixed period of time to generate M time intervals of complex conjugate data v, v*, each time interval of complex conjugate data v, v* including N pairs of complex conjugate data v(m,n), v*(m,n), m, n being positive integers, 1≤m≤M, 1≤n≤N. The window function may have a fixed length, and may be a rectangular window function, a Hamming window function, a Hanning window function or other types of window functions.
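The complex signal demodulation and windowing just described, together with the first transform detailed in the following paragraphs, can be sketched in Python as below. Rather than transforming v and v* separately, this illustrative sketch reads the positive and negative velocity energies from the positive- and negative-frequency bins of a single FFT of v, which is an equivalent view for this purpose; the 64-sample interval length follows the example in the next paragraph, and the test signal is a synthetic assumption.

import numpy as np

def doppler_spectrogram(i_pp, q_pp, n=64):
    v = i_pp + 1j * q_pp                      # complex data v = I' + jQ' (unit 131)
    m = len(v) // n
    frames = v[:m * n].reshape(m, n)          # M intervals of N samples (unit 132)
    spec = np.fft.fft(frames, axis=1)         # first transform per interval (unit 133)
    vp = np.abs(spec[:, 1:n // 2]) ** 2               # positive velocity energies Vp(m,p)
    vn = np.abs(spec[:, n // 2 + 1:][:, ::-1]) ** 2   # negative velocity energies Vn(m,p)
    return vp, vn

fs = 550.0
t = np.arange(int(4 * fs)) / fs
phase = 0.5 * np.sin(2 * np.pi * 1.5 * t)     # chest motion at 1.5 Hz (heartbeat-like)
vp, vn = doppler_spectrogram(np.cos(phase), np.sin(phase))
print(vp.shape, vn.shape)                     # (M, P) energy matrices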
For example, the window function unit 132 may employ the window function to divide the complex data v at a fixed length of 64 pieces of data, with the complex data v(2,32) representing the 32nd piece of complex data in the second time interval. The first time-domain-to-frequency-domain transform unit 133 may perform a first time-domain-to-frequency-domain transform on the complex data v(m,n) to generate a positive velocity energy Vp(m,p) corresponding to the p-th positive velocity in the m-th time interval, p being a positive integer, 1≤p≤P, with a positive velocity energy Vp(2,32) representing the 32nd positive velocity energy in the second time interval. Similarly, the first time-domain-to-frequency-domain transform unit 133 may perform the first time-domain-to-frequency-domain transform on the complex data v*(m,n) to generate a negative velocity energy Vn(m,p) corresponding to the p-th negative velocity in the m-th time interval. The positive velocity energy Vp(m,p) and the negative velocity energy Vn(m,p) may be the energies corresponding to a positive velocity (e.g., representing that the target object 140 moves towards the radar detection system 100) and a negative velocity (e.g., representing that the target object 140 moves away from the radar detection system 100), respectively. The first time-domain-to-frequency-domain transform may be implemented by a short-time Fourier transform, a wavelet transform, a Hilbert-Huang transform, or a combination thereof. In some embodiments, P=N, and the first time-domain-to-frequency-domain transform unit 133 may output positive velocity energies Vp(1,1) to Vp(M,N) and negative velocity energies Vn(1,1) to Vn(M,N) for subsequent use. The positive velocity energies Vp(1,1) to Vp(M,N) and the negative velocity energies Vn(1,1) to Vn(M,N) may be referred to as Doppler spectrogram data. The processor 130 may plot a Doppler spectrogram according to the Doppler spectrogram data, as shown in FIG. 3, in which the horizontal axis represents velocity, the vertical axis represents time, and the colors represent the energies corresponding to the velocities. The Doppler spectrogram shows that the velocity of the target object 140 oscillates substantially between +1 m/s and −1 m/s. The combining unit 134 may perform a combination operation according to the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p) to generate combined Doppler spectrogram data c(m). In some embodiments, the combining unit 134 may perform a linear combination on the positive velocity energies Vp(m,1) to Vp(m,P) and the negative velocity energies Vn(m,1) to Vn(m,P) to generate the combined Doppler spectrogram data c(m). For example, the combining unit 134 may accumulate the positive velocity energies Vp(m,1) to Vp(m,P) and the negative velocity energies Vn(m,1) to Vn(m,P) to generate the combined Doppler spectrogram data c(m). In other embodiments, the combining unit 134 may generate the combined Doppler spectrogram data c(m) according to an extremum (e.g., an absolute value of a maximum energy) of the positive velocity energies Vp(m,1) to Vp(m,P) and the negative velocity energies Vn(m,1) to Vn(m,P) in the m-th time interval. For example, the combining unit 134 may determine the maximum of the positive velocity energies Vp(m,1) to Vp(m,P) and the negative velocity energies Vn(m,1) to Vn(m,P) in the m-th time interval, and set the maximum as the combined Doppler spectrogram data c(m).
In some embodiments, the combining unit 134 may enhance the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p) to generate enhanced positive velocity energies and enhanced negative velocity energies, and generate the combined Doppler spectrogram data c(m) according to the enhanced positive velocity energies and enhanced negative velocity energies. For example, the combining unit 134 may apply a non-linear function, e.g., a logarithmic function implemented by a filter or other signal processing methods, to the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p) to generate the enhanced positive velocity energies log(Vp(m,p)) and enhanced negative velocity energies log(Vn(m,p)). In some embodiments, the combining unit 134 may perform a linear combination on the enhanced positive velocity energies and the enhanced negative velocity energies to generate the combined Doppler spectrogram data c(m). For example, the combining unit 134 may accumulate the enhanced positive velocity energies log(Vp(m,1)) to log(Vp(m,P)) and the enhanced negative velocity energies log(Vn(m,1)) to log(Vn(m,P)) to generate the combined Doppler spectrogram data c(m). In other embodiments, the combining unit 134 may generate the combined Doppler spectrogram data c(m) according to an extremum (e.g., an absolute value of a maximum energy) of the enhanced positive velocity energies log(Vp(m,1)) to log(Vp(m,P)) and the enhanced negative velocity energies log(Vn(m,1)) to log(Vn(m,P)) in the mth time interval. For example, the combining unit 134 may determine the maximum of the enhanced positive velocity energies log(Vp(m,1)) to log(Vp(m,P)) and the enhanced negative velocity energies log(Vn(m,1)) to log(Vn(m,P)) in the mth time interval, and set the maximum as the combined Doppler spectrogram data c(m). In some embodiments, the combining unit 134 may normalize the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p) to generate normalized positive velocity energies and normalized negative velocity energies, and generate the combined Doppler spectrogram data c(m) according to the normalized positive velocity energies and normalized negative velocity energies. For example, the combining unit 134 may distribute all the positive velocity energies Vp(1,1) to Vp(M,P) within a predetermined positive velocity energy range in a proportional manner to generate the normalized positive velocity energies Vp_norm(1,1) to Vp_norm(M,P), and distribute all the negative velocity energies Vn(1,1) to Vn(M,P) within a predetermined negative velocity energy range in a proportional manner to generate the normalized negative velocity energies Vn_norm(1,1) to Vn_norm(M,P). The positive velocity energies may range between 0 and a predetermined maximum, and the negative velocity energies may range between 0 and a predetermined minimum. In some embodiments, the combining unit 134 may perform a linear combination on the normalized positive velocity energies Vp_norm(m,1) to Vp_norm(m,P) and the normalized negative velocity energies Vn_norm(m,1) to Vn_norm(m,P) to generate the combined Doppler spectrogram data c(m). For example, the combining unit 134 may accumulate the normalized positive velocity energies Vp_norm(m,1) to Vp_norm(m,P) and the normalized negative velocity energies Vn_norm(m,1) to Vn_norm(m,P) in the mth time interval to generate the combined Doppler spectrogram data c(m).
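The combination options described above (accumulation, extremum, and logarithmic enhancement) can be sketched as follows; the mode names and the small constant guarding log(0) are assumptions of the sketch:

    import numpy as np

    def combine(vp, vn, mode="sum"):
        # vp, vn are (M, P) arrays of positive/negative velocity
        # energies; returns the combined data c(1..M).
        if mode == "sum":            # linear combination (accumulation)
            return vp.sum(axis=1) + vn.sum(axis=1)
        if mode == "max":            # extremum per time interval
            return np.maximum(vp.max(axis=1), vn.max(axis=1))
        if mode == "log-sum":        # logarithmic enhancement first
            return (np.log(vp + 1e-12).sum(axis=1)
                    + np.log(vn + 1e-12).sum(axis=1))
        raise ValueError(mode)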
The processor 130 may generate data corresponding to the combined Doppler spectrogram according to the combined Doppler spectrogram data c(1) to c(M). The combined Doppler spectrogram may be plotted, as shown in FIG. 4, in which the horizontal axis represents time, and the vertical axis represents the combined Doppler spectrogram data c(m). The combined Doppler spectrogram shows that the combined energy of the target object 140 oscillates substantially between the maximum of the positive velocity energy range and the minimum of the negative velocity energy range (that is, the maximum of the absolute values of the negative velocity energies). In some embodiments, after the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p) are enhanced and/or normalized, the combining unit 134 may filter the enhanced and/or normalized positive velocity energies and the enhanced and/or normalized negative velocity energies using a bandpass filter to generate filtered positive velocity energies and negative velocity energies, and accumulate the filtered positive velocity energies and negative velocity energies in the mth time interval to generate the combined Doppler spectrogram data c(m). The predetermined velocity range may be, for example, between +1 m/s and −1 m/s. In some embodiments, the combining unit 134 may filter out components in the combined Doppler spectrogram data c(m) outside a predetermined frequency range using another bandpass filter or a low-pass filter. The predetermined frequency range may be configured according to a normal heart rate or a normal respiratory rate, e.g., the normal heart rate of an adult ranges substantially between 60 and 100 beats per minute, and the normal respiratory rate of an adult ranges substantially between 12 and 20 breaths per minute. The second time-domain-to-frequency-domain transform unit 135 may perform a second time-domain-to-frequency-domain transform on the combined Doppler spectrogram data c(m) to generate spectrum data C(k), k being a positive integer, 1 ≤ k ≤ K. The spectrum data C(k) represents an energy at a kth frequency band, e.g., the spectrum data C(2) represents the energy at the second frequency band. The second time-domain-to-frequency-domain transform may be implemented by a discrete Fourier transform or a fast Fourier transform. In some embodiments, K = M, and the second time-domain-to-frequency-domain transform unit 135 may output spectrum data C(1) to C(M). The processor 130 may plot a spectrum diagram according to the spectrum data C(1) to C(M), as shown in FIG. 5, the horizontal axis representing frequency, and the vertical axis representing energy. The spectrum diagram shows that the spectrum data C(1) to C(M) of the target object 140 peak at 0.75 Hz, 1.5 Hz and 3 Hz. The sign-of-life detection unit 136 may determine whether a life is detected according to the spectrum data C(1) to C(M). When a local maximum of the spectrum data C(1) to C(M) is within a sign-of-life range, the sign-of-life detection unit 136 may determine that a life is detected. When all local maxima of the spectrum data C(1) to C(M) are outside the sign-of-life range, the sign-of-life detection unit 136 may determine that a life is not detected. The sign-of-life range may be configured according to the normal heart rate, e.g., between 1 Hz and 2 Hz. The sign-of-life range may alternatively be configured according to the normal respiratory rate, e.g., between 0.2 Hz and 0.4 Hz.
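A sketch of the second transform and the sign-of-life decision follows, under the assumption that c(m) arrives at a known interval rate and that a local maximum inside the configured band constitutes a detection; names and defaults are illustrative:

    import numpy as np

    def detect_life(c, interval_rate, band=(1.0, 2.0)):
        # interval_rate is the number of time intervals per second,
        # e.g., 550/64 ~ 8.6 for the example rates above (assumed).
        spectrum = np.abs(np.fft.rfft(c - c.mean()))
        freqs = np.fft.rfftfreq(len(c), d=1.0 / interval_rate)
        # Local maxima: bins larger than both neighbours.
        peaks = [k for k in range(1, len(spectrum) - 1)
                 if spectrum[k] > spectrum[k - 1]
                 and spectrum[k] > spectrum[k + 1]]
        # Detection if any peak falls in the sign-of-life band
        # (1-2 Hz for heart rate, or 0.2-0.4 Hz for respiration).
        return any(band[0] <= freqs[k] <= band[1] for k in peaks)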
The sign-of-life detection unit 136 may output a detection result of the life as an output signal So to an output device of the radar detection system 100 such as a monitor, a printer or a speaker, or to a data storage device such as a hard drive. The radar detection system 100 may generate complex conjugate data according to in-phase components and quadrature components of an echo signal to generate positive velocity energies and negative velocity energies of a target object and detect expansion and contraction movements of a living object, thereby determining whether the target object is a life in an accurate and quick manner. FIG. 6 is a flowchart of a method 600 of detecting a life according to an embodiment of the invention. The method 600 may be adopted by the radar detection system 100, and may include Steps S602 to S614. Steps S602 to S609 are used to generate Doppler spectrogram data. Steps S610 to S614 are used to determine whether a life is detected according to the Doppler spectrogram data. Any reasonable step change or adjustment is within the scope of the disclosure. Steps S602 to S614 are explained using the radar detection system 100:
Step S602: The receiver 122 receives the echo signal Se;
Step S604: The preprocessing circuit 126 performs a preprocessing procedure on the echo signal Se to generate the in-phase components I′ and the quadrature components Q′ of the preprocessed signal;
Step S606: The complex signal demodulation unit 131 generates the complex conjugate data v, v* according to the in-phase components I′ and the quadrature components Q′ of the preprocessed signal;
Step S608: The window function unit 132 divides the complex conjugate data v, v* using the window function to generate the complex conjugate data v(m,n), v*(m,n);
Step S609: The first time-domain-to-frequency-domain transform unit 133 performs the first time-domain-to-frequency-domain transform on the complex data v(m,n), v*(m,n) to generate the positive velocity energy Vp(m,p) and the negative velocity energy Vn(m,p);
Step S610: The combining unit 134 generates the combined Doppler spectrogram data c(m) according to the positive velocity energies Vp(m,p) and the negative velocity energies Vn(m,p);
Step S612: The second time-domain-to-frequency-domain transform unit 135 performs the second time-domain-to-frequency-domain transform on the combined Doppler spectrogram data c(m) to generate the spectrum data C(k);
Step S614: The sign-of-life detection unit 136 determines whether a life is detected according to the spectrum data C(k).
The explanation for Steps S602 to S614 is provided in the preceding paragraphs, and will be omitted here for brevity. The method 600 may generate complex conjugate data according to in-phase components and quadrature components of an echo signal to generate positive velocity energies and negative velocity energies of a target object and detect expansion and contraction movements of a living object, thereby determining whether the target object is a life in an accurate and quick manner. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
11860267
DESCRIPTION OF THE INVENTION EMBODIMENTS
The following description of the invention embodiments is not intended to limit the invention to these invention embodiments, but rather to enable any person skilled in the art to make and use this invention.
1. Virtual Aperture Array (VAA) Radar Tracking
As discussed in the background section, traditional array-based radar systems are limited: angular resolution depends both on the number of elements in the receiver array and the angle between the array and the target: θ_resolution ≈ λ / (N·d·cos θ), where N is the number of elements in the array and d is the distance separating them. Here, the number of array elements (and distance separating them) relates to the receiver's aperture; that is, more elements (or increased element spacing) results in increased receiver aperture. As the angular resolution formula makes clear, to increase angular resolution (without changing carrier frequency), one must increase the receiver's aperture. Typically, this is done by adding receiver array elements or increasing the separation distance between elements; however, these techniques increase either or both of the receiver array's physical size or its cost and physical complexity. Nevertheless, this traditional technique shines in that it increases radar resolution with relatively little change in processing latency. As an alternative to this traditional technique, synthetic aperture radar (SAR) was created. In SAR, a moving antenna (or antenna array) captures multiple signals sequentially as it moves, as shown in FIG. 2A; these signals are then combined (using knowledge of the antenna's movement) to simulate the effect of a larger antenna, as shown in FIG. 2B. SAR manages to simulate increased radar aperture (thus increasing radar resolution), but requires precise antenna motion data and generally entails a significant increase in processing latency. Both requirements are problematic in many applications. A novel technique, hereafter referred to as Virtual Aperture Array (VAA) radar tracking, was created to simulate increased radar aperture (as SAR does) without incurring the additional cost/size of increasing physical array size or the heavy downsides of SAR (e.g., motion data requirements and high processing latency). This technique was first introduced in parent U.S. patent application Ser. No. 15/883,372. Note that while the term “virtual aperture” has various uses in the field of radar tracking, as used in the present application, Virtual Aperture Array radar tracking specifically refers to the tracking techniques described herein (and not to any unrelated technology sharing the term). The VAA radar tracking technique functions by capturing instances of a first signal at a physical array simultaneously (like a traditional phased array), then capturing instances of a second signal at the same physical array (the instances of the second signal captured simultaneously, but not necessarily at the same time as the instances of the first signal are captured); if applicable, capturing additional instances in the same manner; and finally processing the data received from all captured instances together to generate a higher-resolution radar tracking solution than would otherwise be possible. Notably, the first and second signals (as well as any additional signals) are encoded with distinct phase information.
This distinct phase information enables the instances of the second signal to be treated as being received at a virtual receiver array displaced from the physical array (creating a virtual aperture larger than the physical aperture). For example, a first signal may be captured as shown in FIG. 3A, having a first phase encoding, and a second signal may be captured as shown in FIG. 3B, having a second phase encoding; these signals may be processed together as shown in FIG. 3C. As shown in FIG. 4A, when a reflected signal is received from a target at an angle to (i.e., not normal to) the six-element radar array, the signal received at each receiver element in the array is phase shifted relative to the signal received at other elements in the array, as shown in FIG. 4B. From the phase shift and spacing between elements, the angle of the target to the array may be determined. As shown in FIG. 5A, VAA can simulate the same aperture with only three elements through the use of two phase shifted signals, resulting in the signals at receiver elements as shown in FIG. 5B (note that the signal at RX1 at t2 is similar to the signal at RX4 in FIG. 4B, and so on). The positioning of the “virtual elements” is dependent on the phase shift between the first and second signals.
2. Method for Interpolated Virtual Aperture Array Radar Tracking
A method 100 for interpolated virtual aperture array (IVAA) radar tracking includes transmitting a set of probe signals S110, receiving a set of reflected probe signals S120, and calculating initial tracking parameters from the set of reflected probe signals S130, as shown in FIG. 6. The method 100 may additionally include refining the initial tracking parameters S140 and/or modifying probe signal characteristics S150. While the original VAA technique is a powerful one (especially given that it can work well with small transmit and receive arrays), as the dimensions of the virtual array increase, so does error in the system. This is because each additional virtual element is essentially an extrapolation of the physical array. The present application is directed to a novel technique that builds upon aspects of the original VAA tracking, but does so within the framework of an interpolated sparse physical array (bounding the error that occurs from the addition of virtual array elements). For example, FIG. 7A shows a (sparsely spaced) two-transmitter, three-receiver array; the receive array can receive probe signals from both transmitters and, using VAA, can process the signals as shown in FIG. 7B, increasing aperture and thus angular resolution. By incorporating interpolation, angular resolution can be further increased, as shown in FIG. 7C. Further examples of interpolation are as shown in FIGS. 7D-7F (note that in these examples, while a first pair of physical receiver elements may be spaced by some sub-half-wavelength distance, additional elements may be spaced farther apart) and 7G-7H (note here that while previous examples are given with respect to a 1D array, it is understood that this technique can be expanded to two or three dimensions). The performance of IVAA in such an implementation approaches that of a physical array while requiring a much smaller number of array elements, but IVAA's flexible nature can provide further advantages. As described in later sections, IVAA may utilize an FOV-detection-vector based approach to target identification, which can provide high angular resolution across a wide field-of-view (FOV) without the downsides of traditional beam-steering.
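As a numeric illustration of this virtual-aperture construction (not drawn from the disclosure itself; the wavelength, spacing, and target angle below are arbitrary assumptions), a three-element snapshot of a suitably phase-encoded second signal lines up exactly with elements RX4-RX6 of a six-element array:

    import numpy as np

    lam = 0.005                      # wavelength (assumed)
    d = lam / 2                      # receiver element spacing (assumed)
    theta = np.deg2rad(12.0)         # target angle (assumed)
    rx = np.arange(3) * d            # three physical receiver positions

    # Snapshot of the first signal at RX1-RX3 (unit-amplitude model).
    snap1 = np.exp(1j * 2 * np.pi / lam * rx * np.sin(theta))
    # The second signal's phase encoding corresponds to reflections
    # arriving as if received 3*d further along the array
    # (transmitter-separation case with dTX = 3*d).
    snap2 = snap1 * np.exp(1j * 2 * np.pi / lam * 3 * d * np.sin(theta))

    virtual = np.concatenate([snap1, snap2])   # behaves like RX1-RX6
    expected = np.exp(1j * 2 * np.pi / lam
                      * np.arange(6) * d * np.sin(theta))
    print(np.allclose(virtual, expected))      # True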
This technique is hereafter referred to as “Parallel FOV Detection”. Note that like VAA and IVAA, the term “parallel FOV detection” specifically refers to the detection technique described in later sections (and not to any unrelated technology sharing the term). Further, IVAA may itself utilize transmit and/or receive phase modification to further increase FOV. The method 100 is preferably implemented by a system for IVAA radar tracking (e.g., the system 200), but may additionally or alternatively be implemented using any suitable object tracking system capable of performing virtual aperture array object tracking (e.g., SONAR, LIDAR). S110 includes transmitting a set of probe signals. S110 functions to transmit a set of signals that, after reflection by a target, can provide information about the target (e.g., relative location, velocity, etc.). S110 preferably includes transmitting frequency shift keyed (FSK) RADAR signals or frequency-modified continuous wave (FMCW) RADAR signals, but S110 may include transmitting any signal satisfying these constraints; e.g., an electromagnetic signal (as in radio waves in RADAR, infrared/visible/UV waves in LIDAR) or a sound signal (as in SONAR). S110 preferably includes transmitting at least two distinct probe signals. The set of probe signals in S110 preferably satisfy two constraints: each of the set is distinct in phase (as measured from some reference point) and each of the set is distinguishable from the others upon reception. The distinction in phase enables the effective increase of aperture (and thus of angular resolution), while distinguishability ensures that upon reception, signal data is appropriately processed given the distinction in phase. S110 may accomplish phase distinction in several manners. For example, S110 may include transmitting probe signals from physically distinct antenna elements. For a target at an angle from the transmitter elements, the separation encodes an inherent phase difference (one that is dependent on the angle!), as shown in FIG. 8. For two transmitters separated by a distance dTX, the phase difference at a target at θ from normal is approximately dϕ = (2π/λ)·dTX·sin θ, and the phase difference seen at the receiver is approximately the same. As a second example, S110 may include transmitting probe signals at different times from the same antenna element(s), but with different phase information. For example, S110 may include transmitting a first signal from an antenna element at a first time, and then transmitting a second phase shifted signal from the same antenna element at a second time. Note that this is not equivalent to the phase difference in the first example; the phase difference dϕ (between the first and second signal) seen at a target is (approximately) constant and independent of the target's angle. Also note that while this phase distinction results in the simulation of increased receiver elements, it also results in the simulation of increased transmitter elements, as shown in FIG. 9. The result of this is that when phase distinction is generated by antenna element separation, the size of the virtual aperture is roughly the same for all target angles, whereas in the explicit phase shifting example, the size of the virtual aperture is target-angle dependent. For example, in the transmitter separation case, the array shift can be written as d_array = dϕ·(λ/2π)·(1/sin θ) = dTX, while in the explicit phase shifting case d_array = dϕ·(λ/2π)·(1/sin θ), where dϕ is a constant (and thus d_array is target angle dependent).
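The contrast between the two cases can be checked numerically; the wavelength, transmitter separation, and fixed phase shift below are arbitrary assumptions made for the sketch:

    import numpy as np

    lam = 0.005                          # wavelength (assumed)
    d_tx = 3 * lam / 2                   # transmitter separation (assumed)

    for theta_deg in (10.0, 30.0, 60.0):
        theta = np.deg2rad(theta_deg)
        # Transmitter-separation case: dphi grows with sin(theta),
        # so the implied array shift is constant (= d_tx).
        dphi_sep = 2 * np.pi / lam * d_tx * np.sin(theta)
        shift_sep = dphi_sep * lam / (2 * np.pi) / np.sin(theta)
        # Explicit phase-shift case: dphi is fixed, so the shift varies.
        dphi_fixed = np.pi / 2
        shift_fixed = dphi_fixed * lam / (2 * np.pi) / np.sin(theta)
        print(theta_deg, shift_sep / lam, shift_fixed / lam)

The transmitter-separation shift stays at dTX for every angle, while the fixed-phase shift varies with target angle, as the formulas above indicate.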
While S110 preferably performs explicit phase shifting with a phase shifter (i.e., a device for which phase shift is ideally independent of frequency), S110 may additionally or alternatively perform explicit phase shifting using delay lines (or any other device for which phase shift depends on frequency) and/or any combination of time delays and phase shifters. S110 may additionally or alternatively include combining phase shifting techniques (e.g., using multiple transmitters separated by a distance and phase-shifting the transmitters relative to one another). Note that while examples are given with time-constant phase shifts, S110 may additionally or alternatively include modulating phase over time, by physically shifting transmitters (i.e., giving dTX time dependence) and/or by adding phase dϕ where the phase is a function of time. The phase of the transmitted signal over time is referred to as the phase function. Phase functions may be referenced to any point. For example, if first and second antenna elements (separated by a non-zero distance) produce identical first and second signals respectively, it can be said that the phase function of the first signal (referenced to the first transmitter) is identical to the phase function of the second signal (referenced to the second transmitter). However, the phase of these two signals after reflection by a target at an angle from the transmitter array is not seen as identical at the target (or at the receiver array). S110 may additionally or alternatively include modulating phase with respect to angle (e.g., by using a steerable or directional antenna and modulating phase while sweeping the antenna, using an antenna array and modulating phase for different elements of the array, etc.). S110 may also accomplish signal distinguishability in any of several manners. As previously mentioned, one way in which S110 may enable signal distinguishability is by time-duplexing signals (e.g., transmitting a first frequency chirp signal with a first phase encoding, then a second signal with a second phase encoding); however, S110 may additionally or alternatively make signals distinguishable by frequency-duplexing signals (e.g., transmitting a first frequency chirp signal within a first frequency band and transmitting a second frequency chirp signal within a second frequency band non-overlapping with the first), or by encoding the signals (e.g., using a distinct frequency modulation or amplitude modulation technique to distinguish a signal from others). S110 may additionally or alternatively accomplish signal distinguishability in any manner. S120 includes receiving a set of reflected probe signals. S120 functions to receive data resulting from the reflection of the probe signal transmitted in S110. S120 preferably includes measuring phase, magnitude, and frequency information from reflected probe signals, but S120 may additionally or alternatively include measuring any available characteristics of the reflected probe signals. S120 preferably includes measuring any data necessary to recover signal identification information (i.e., information to determine which signal of the transmitted set the reflected probe signal corresponds to). S130 includes calculating initial tracking parameters from the set of reflected probe signals.
S130 functions to calculate a set of tracking parameters that identify at least a position of the target relative to the radar receiver; additionally or alternatively, tracking parameters may include additional parameters relevant to object tracking (e.g., target velocity, target acceleration). Note that S130 may include calculating more tracking parameters for a given target than necessary to achieve a position solution; for example, as described later, while only range, azimuth angle, and elevation angle may be necessary to calculate object position, composite angle may also be calculated and used to refine and/or check azimuth/elevation angle calculations. Further, while S130 primarily includes calculating tracking parameters from the reflected probe signals, S130 may additionally or alternatively calculate or otherwise receive parameters relevant to object tracking (e.g., radar egomotion velocity) that are not calculated using the probe signal. Parameters used to establish target position may be defined in any coordinate system and base. In the present application, target position is preferably represented in a Cartesian coordinate system with the origin at the radar (e.g., x, y, z represents target position) or a spherical coordinate system with the same origin, wherein position is defined by range (R), azimuth (α), and elevation (θ); alternatively, target position may be described in any manner. Note that elevation (and similarly azimuth) is an example of an angle between a reference vector and a projected target vector; the projected target vector is the vector between the observer (e.g., the radar) and the target, projected into a reference plane (the reference plane containing the reference vector). The method 100 may include calculating any such angles. While, as previously mentioned, any parameters relevant to object tracking may be calculated in S130, some additional parameters that may be calculated include target range rate (dR/dt, typically calculated from Doppler data), relative target velocity (the velocity of the target with respect to the radar receiver), and radar egomotion velocity (referred to in this application as egovelocity, the velocity of the radar receiver relative to a stationary position). These may be related; for example, range rate is equivalent to relative target velocity multiplied by the cosine of the looking angle between the radar and the target. S130 may additionally or alternatively include calculating composite angle (β, the angle between the target and the radar: β = arccos[cos α · cos θ]; see also FIG. 10). While composite angle may be derived from elevation and azimuth (or vice versa), it may also be calculated from Doppler data. If, for example, elevation and azimuth are calculated from a first data source (e.g., phase differences between receivers in a receiver array) and composite angle is calculated from a second data source (e.g., Doppler frequency shift and relative velocity), composite angle can be used alongside elevation and azimuth to produce a more accurate solution. S130 may include calculating tracking parameters from any suitable data source. For example, operating on a radar system with a horizontal receiver array, azimuth may be calculated based on phase differences between the reflected probe signal seen by each receiver in the array. Likewise, elevation may be calculated in a similar manner by a vertical receiver array (and/or elevation and azimuth may be calculated in similar manners by a two-dimensional receiver array).
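A minimal sketch of phase-difference angle estimation and the composite angle relation follows, assuming a uniform array and a unit-amplitude signal model; azimuth_from_phase and its arguments are hypothetical names:

    import numpy as np

    def azimuth_from_phase(snapshot, d, lam):
        # Mean phase progression between adjacent elements.
        dphi = np.angle(np.sum(snapshot[1:] * np.conj(snapshot[:-1])))
        # dphi = (2*pi/lam) * d * sin(alpha)  =>  solve for alpha.
        return np.arcsin(dphi * lam / (2 * np.pi * d))

    def composite_angle(alpha, theta):
        # beta = arccos(cos(alpha) * cos(theta)), as above.
        return np.arccos(np.cos(alpha) * np.cos(theta))

    # Example: a 4-element array with half-wavelength spacing and a
    # target at 20 degrees azimuth.
    lam, d = 1.0, 0.5
    snap = np.exp(1j * 2 * np.pi / lam * np.arange(4) * d
                  * np.sin(np.deg2rad(20.0)))
    print(np.degrees(azimuth_from_phase(snap, d, lam)))   # ~20.0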
Range, for example, may be calculated based on travel time of a probe signal. Range rate, for example, may be calculated instantaneously (e.g., using Doppler frequency shift data) or over time (e.g., by measuring change in range over time). Composite angle, as previously discussed, may be derived from elevation/azimuth or calculated explicitly from Doppler data: fD ≈ K·v·cos β, where K = 2f0/c. S130 may additionally include calculating relative target velocity in any manner. For example, S130 may include determining that a target is stationary and calculating relative target velocity based on egovelocity (i.e., in this case, relative target velocity is egovelocity). A target may be determined as stationary in any manner; for example, by identifying the target visually as a stationary target (e.g., a stop sign may be identified by its appearance), by identifying the target by its radar cross-section as a stationary target (e.g., a stop sign or a road may be identified by shape or other features), by comparing Doppler data to other (e.g., phase) data (e.g., if the composite angle provided by Doppler data is substantially different from the composite angle derived from elevation and azimuth, the target may be a moving target), by the size of the target, or in any other manner. Likewise, egovelocity may be determined in any manner (e.g., a GPS receiver or IMU coupled to the position of the radar receiver, external tracking systems, etc.). As another example, S130 may include receiving relative target velocity information based on external data; e.g., an estimate from a visual tracking system coupled to the position of the radar receiver. Relative target velocity information may even be provided by an external tracking system or the target itself (e.g., transmissions of IMU data from a target vehicle). To determine Doppler frequency shift, S130 may include converting reflected signal data to the frequency domain using a Fast Fourier Transform (or any other technique to convert time domain signals to frequency domain for analysis). S130 may also improve system performance by using a Sliding Fast Fourier Transform (SFFT) or similar techniques such as the Sliding Discrete Fourier Transform (SDFT) and Short-Time Fourier Transform (STFT). These techniques allow Fourier transforms for successive samples in a sample stream to be computed with substantially lower computational overhead, improving performance. S130 preferably includes calculating initial tracking parameters from two or more reflected probe signals by first linking signal instances to receiver elements S131 and generating interpolated signal instances S132. From the linked instances (including those generated via interpolation), S130 includes calculating the tracking parameters. S130 may then include calculating tracking parameters by performing beamforming (S133) and/or by performing parallel FOV detection (S134). S131 includes linking signal instances to receiver elements. S131 functions to correspond signal instances received at a given receiver element to a real or virtual receiver element. For example, a radar system that time-duplexes first (zero-phase) and second (phase-shifted) signals may correspond a signal instance received at a physical receiver element either to that receiver element (if the reflected signal is the first signal) or to a shifted virtual receiver element (if the reflected signal is the second signal).
Note that while in some cases the translation of virtual receiver elements is independent of target angle, in cases where the translation of virtual receiver elements depends upon target angle, it may be necessary to preliminarily determine target angle (in order to know the position of the virtual receiver elements) using one or more subsets of received signals (each subset corresponding to one of the unique transmitted signals) independently, prior to using all received signals jointly. Alternatively stated, the virtual elements may be described in terms of the physical elements by an element translation function; if this translation function is not already known (as in the case of separated transmitters), S131 may include determining the element translation function for a given target. S132 includes generating interpolated signal instances. S132 functions to generate additional signal instances from those captured, where these additional signal instances correspond to additional virtual receiver elements positioned between other receiver elements (either real or virtual). For example, if signal instances are linked to physical receiver elements at positions {0, d, 2d, 3d} and virtual receiver elements at {10d, 11d, 12d, 13d} in S131, S132 may include generating additional signal instances corresponding to virtual receiver elements at {4d, 5d, . . . , 8d, 9d}. S132 may use any technique for generating these interpolated signal instances. In one embodiment, S132 includes generating linear combinations of phase modulated codes (transmitted by transmitters of the ranging system) to simulate signal components as would be expected and/or predicted across interpolated receiver elements. S133 includes performing beamforming across receiver elements. Once data has been linked to real or virtual receiver element positions, S133 functions to calculate object tracking data (e.g., target range and angle) using beamforming techniques. Beamforming techniques that may be used by S133 include but are not limited to conventional (i.e., Bartlett) beamforming, Minimum Variance Distortionless Response (MVDR, also referred to as Capon) beamforming, Multiple Signal Classification (MUSIC) beamforming, or any other beamforming technique. S133 preferably includes performing digital beamforming for a given object-tracking element array using every element (both real and virtual) in the array, but S133 may additionally or alternatively use any subset of elements to perform angle calculations. In some embodiments, S133 may include dynamically selecting the receiver elements used to perform digital beamforming techniques (e.g., based on receiver noise or any other relevant factor). S134 includes performing parallel FOV detection. In parallel FOV detection, signals from receiver element pairs, each corresponding to a different field of view, are analyzed in parallel to determine angle-to-target. For example, consider an array with n elements {e1, . . . , en} (for example, as shown in FIG. 11A). n−1 pairs can be made with the first element: {e12, . . . , e1n}. Each pair has an associated FOV given by: FOVi = 2·sin⁻¹(λ / (2(i−1)d)), where d is the interelement spacing. Note that while this formula assumes a regular interelement spacing, it is understood that elements need not be spaced regularly (and even without regular interelement spacing, the basic relationship that FOV is inverse to the distance between the 1st and ith elements holds).
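The FOV formula can be exercised directly; the helper name fov_windows is hypothetical, and the example reproduces the ten-element, 2λ-spacing case discussed below:

    import numpy as np

    def fov_windows(n, d, lam):
        # Full field of view (degrees) for each pair (e1, ei), i = 2..n,
        # of a uniform array with element spacing d.
        i = np.arange(2, n + 1)
        return np.degrees(2 * np.arcsin(lam / (2 * (i - 1) * d)))

    # Ten elements spaced 2 wavelengths apart (lam = 1, d = 2).
    print(np.round(fov_windows(10, 2.0, 1.0), 1))
    # -> [29.  14.4  9.6  7.2  5.7  4.8  4.1  3.6  3.2]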
The FOV of the system as a whole (i.e., the widest FOV) is the FOV of the first two elements: FOV2 = 2·sin⁻¹(λ/(2d)). In a traditional phased-array radar system, the angular resolution of such an array is δα ≈ λ / (N·d·cos α). Note here that as angle moves away from the center angle α = 0, resolution decreases. For example, while δα|(α=0) ≈ λ/(N·d) (the resolution at the center angle), δα|(α=sin⁻¹(λ/2d)) ≈ λ / (N·√(d² − λ²/4)) → ∞ (for d = λ/2). This is why beamforming is often performed for such arrays: by steering the center angle across the FOV, a high angular resolution can be achieved (but this requires that phase be modified over time to accomplish beamsteering). In parallel FOV detection, instead of beamsteering across a wide FOV to preserve angular resolution, FOV detection vectors are generated for multiple FOVs. For example, as shown in FIG. 11B, consider two targets (target 1 and target 2). Target 1 exists in the third-narrowest FOV (width of FOVn-2 = 2·sin⁻¹(λ/(2(n−3)d))) and every wider FOV {FOVn-2, . . . , FOV2}, while Target 2 exists in all FOVs {FOVn, . . . , FOV2}. By performing target detection on a set of FOVs in parallel, FOV detection vectors for each detected target can be generated. For example, Target 1 (at angle θ1) might be associated with an FOV detection vector that looks like {θ1, . . . , θ1, x, x}. The first series of θ1s represents that Target 1 has been detected at θ1 by each of the element pairs e12 . . . e1(n-2), while the x's show non-detects at {e1(n-1), e1n}. Likewise, Target 2 might be associated with an FOV detection vector that looks like {θ2, . . . , θ2, θ2, θ2}. Stated alternatively, FOV detection vectors may be calculated for pairs of the superset of radar array elements comprising the physical elements of the array as well as first and second sets of virtual elements (corresponding to virtual elements generated from phase-shifting and interpolation respectively). Notably, at wider angles, angular resolution is poor (as described above). However, the difference in angle between FOVs is relatively small. For example, imagine an array with 2λ element spacing and ten elements. The FOVs are as follows: {29°, 14.4°, 9.6°, 7.2°, 5.7°, 4.8°, 4.1°, 3.6°, 3.2°}. Suppose a target is detected in FOV2 and FOV3 but not in FOV4 . . . FOV10 (i.e., it lies within 0±7.2° but not within 0±4.8°). The difference between FOV3 and FOV4 in magnitude is 2.4°, so the target is localized to a band 2.4° wide. At this angle, the angular resolution for a traditional array (without performing beamsteering, using only three elements) would be 9.6°. (We only use three elements because a 4+ element array with this spacing would have an FOV narrower than the region the target is in.) Likewise, a traditional array with beamsteering achieves a resolution of 2.8°. The takeaway here is that parallel FOV detection can achieve accuracy comparable to that of beamsteering (without actually needing to perform the time-intensive phase modulation required to perform beamsteering). Thus, S134 preferably includes generating FOV detection vectors for detected targets, and determining angles-to-target from the FOV detection vectors. Each detection vector preferably contains an entry for each FOV window (corresponding to each possible pair of a reference receiver element and all other receiver elements) corresponding to whether or not a target was detected (and/or values usable to indicate the same, such as detection probability magnitudes and/or the calculated angle to target from that receiver pair); additionally or alternatively, FOV detection vectors may contain any information relevant to determining target angle.
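A sketch of how an FOV detection vector localizes a target, using the ten-element example above; this uses binary detects only, whereas a practical vector might instead hold per-pair angle estimates (the helper name is hypothetical):

    import numpy as np

    def fov_detection_vector(target_deg, fovs_deg):
        # One entry per pair (e1, ei): True if the target lies inside
        # that pair's field of view.
        return [abs(target_deg) <= fov / 2 for fov in fovs_deg]

    fovs = [29.0, 14.4, 9.6, 7.2, 5.7, 4.8, 4.1, 3.6, 3.2]
    print(fov_detection_vector(6.0, fovs))
    # -> [True, True, False, False, ...]: detected only by the two
    #    widest pairs, localizing the target to the band between
    #    +/-4.8 and +/-7.2 degrees, as in the example above.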
FOV detection across FOVs preferably occurs simultaneously, but may additionally or alternatively occur sequentially or in any manner. Note that the above examples are given with respect to a single transmit signal. When multiple transmit signals are used (e.g., via time multiplexing or via multiple transmitter elements), detection vectors may include data for each transmit signal. Notably, because the transmit elements may themselves be in an array (physical, virtual, or otherwise), the use of multiple transmit signals may further increase the angular resolution of the method 100 (i.e., the transmit signals themselves form “fields of view”). S140 includes refining the initial tracking parameters. S140 functions to generate a more accurate tracking solution than that initially calculated by S130. In a first example implementation, S140 includes running a Kalman filter on Cartesian coordinates of a target generated from elevation angle or azimuth angle (determined from phase information), range, and composite angle, constrained by error bounds of the composite angle. In a second example implementation, S140 includes running a Kalman filter on Cartesian coordinates of a target generated from elevation angle and azimuth angle (determined from phase information), range, and composite angle, constrained by error bounds of the composite angle. S140 may additionally or alternatively include filtering, refining, and/or constraining tracking parameters in any manner. S150 includes modifying probe signal characteristics. S150 functions to modify characteristics of the transmitted probe signals (at either or both of transmitter and receiver elements) to ensure high performance of the radar tracking algorithm. One of the advantages of the method 100 is that virtual transmitter/receiver elements can be added (and the virtual aperture expanded) or removed at will. Adding more virtual elements increases the potential accuracy of object tracking performed by the method 100, but also increases the latency of object tracking. S150 may include modifying probe signal characteristics based on the output of S130; for example, if during object tracking it is detected that a first set of data (corresponding to an earlier-transmitted signal and real receivers, for example) and a second set of data (corresponding to a later-transmitted signal and virtual receivers) fail to converge upon an object tracking solution within some threshold error bounds, S150 may include modifying the transmitted signal to reduce the number of virtual elements (e.g., reducing the number of distinct phase-encoded signals from three to two). S150 may alternatively include modifying probe signal characteristics based on other data. For example, S150 may include modifying probe signal data based on radar array motion (e.g., the speed of an automobile for a car-mounted radar): modifying transmission to increase the virtual aperture when the car is moving more slowly, and modifying transmission to decrease the virtual aperture when the car is moving more quickly. S150 may additionally or alternatively include modifying probe signal characteristics (at either transmitter or receiver) in any manner. In one implementation of an invention embodiment, S150 includes performing beamsteering on one or both of transmit and receive signals.
In contrast to the beamforming described previously for traditional linear radar arrays (where a narrow beam is scanned across a wide and static FOV, as shown in FIG. 12A), the beamsteering of S150 functions to shift the center angle of all FOVs, as shown in FIG. 12B. Beamsteering is preferably performed by modifying the phase of transmit signals either at transmit elements or at receive elements, but may additionally or alternatively be performed in any manner. Beamsteering may be used to further increase angular resolution (by scanning the entire FOV2 with a known deflection angle while detected targets cross FOV boundaries, detection accuracy/resolution can be improved). Note that because the spacing between array elements may be larger than λ/2, aliasing may occur, as shown in FIG. 13. In such cases, S150 may include steering or otherwise modifying signals (at transmitter and/or receiver) to aid in the rejection of aliases. For example, transmitter FOVs may be scanned independently of receiver FOVs, removing the symmetry otherwise preventing the detection of the true target over aliases. For example, if a transmit array is scanned such that a null of the transmit pattern falls on the alias, the target will still show up (as shown in FIG. 14A), whereas if the null falls on the real target, the target will not have a transmit signal to reflect, as shown in FIG. 14B.
3. System for Interpolated Virtual Aperture Array Radar Tracking
A system 200 for interpolated virtual aperture array (IVAA) radar tracking includes a transmitter array 210, a horizontal receiver array 220, and a signal processor 240, as shown in FIG. 15. The system 200 may additionally include a vertical receiver array 230 and/or a velocity sensing module 250. Further, the system 200 may include any number of virtual transmitters 211 and/or virtual receiver elements 222/232, as shown in FIG. 16 (while not explicitly shown here, it is understood that such virtual receiver elements may also include interpolated elements as described in the method 100). Similarly to the method 100, the system 200 utilizes IVAA radar tracking to simulate increased radar aperture (as SAR does) without incurring the additional cost/size of increasing physical array size or the heavy downsides of SAR (e.g., motion data requirements and high processing latency). The IVAA radar tracking technique of the system 200 functions by capturing instances of a first signal at a physical array simultaneously (like a traditional phased array), then capturing instances of a second signal at the same physical array (the instances of the second signal captured simultaneously, but not necessarily at the same time as the instances of the first signal are captured); if applicable, capturing additional instances in the same manner; and finally processing the data received from all captured instances together to generate a higher-resolution radar tracking solution than would otherwise be possible. Notably, the first and second signals (as well as any additional signals) are encoded with distinct phase information. This distinct phase information enables the instances of the second signal to be treated as being received at a virtual receiver array displaced from the physical array (creating a virtual aperture larger than the physical aperture). For example, a first signal may be captured as shown in FIG. 4A, having a first phase encoding, and a second signal may be captured as shown in FIG. 4B, having a second phase encoding; these signals may be processed together as shown in FIG. 4C.
The transmitter array 210 functions to transmit a signal that, after reflection by a target, can provide information about the target (e.g., relative location, velocity, etc.). The transmitter 210 preferably transmits a frequency shift keyed (FSK) RADAR signal or a frequency-modified continuous wave (FMCW) RADAR signal, but the transmitter 210 may transmit any signal satisfying these constraints; e.g., an electromagnetic signal (as in radio waves in RADAR, infrared/visible/UV waves in LIDAR) or a sound signal (as in SONAR). The transmitter 210 preferably has multiple transmitting elements (e.g., a transmit array), but may additionally or alternatively have a single transmitting element (e.g., a transmit antenna). If the transmitter 210 has multiple elements, these elements may include a single transmitter paired to multiple antennas (e.g., spaced in a particular pattern and/or with antennas coupled to phase/time delays); multiple transmitters, each paired to a single antenna; multiple transmitters paired to multiple antennas; or any other configuration. For example, a transmitter 210 may include transmitter elements spaced by a distance substantially greater (e.g., >3×) than the distance between receiver elements. Likewise, transmitter arrays may be oriented in any manner relative to receiver arrays. In addition to the transmitter 210, the system 200 may additionally include any number of virtual transmitters 211. As described with respect to the method 100, virtual transmitters are created by phase-shifting the output of one or more real transmitters 210 and may correspond to a translated element of the transmitter 210. The horizontal receiver array 220 functions to receive data resulting from the reflection of the probe signal(s) transmitted by the transmitter 210. The horizontal receiver array 220 preferably measures phase, magnitude, and frequency information from reflected probe signals, but the horizontal receiver array 220 may additionally or alternatively measure any available characteristics of the reflected probe signals. From data received from the horizontal receiver array 220, tracking parameters relating to a tracking target may be calculated. The horizontal receiver array 220 is preferably used to determine azimuth (α), as shown in FIG. 9, but parameters used to establish target position may be defined in any coordinate system and base, and the horizontal receiver array 220 may be used to determine any relevant tracking parameters. In the present application, target position is preferably represented in a Cartesian coordinate system with the origin at the radar (e.g., x, y, z represents target position) or a spherical coordinate system with the same origin, wherein position is defined by range (R), azimuth (α), and elevation (θ); alternatively, target position may be described in any manner. Note that elevation (and similarly azimuth) is an example of an angle between a reference vector and a projected target vector; the projected target vector is the vector between the observer (e.g., the radar) and the target, projected into a reference plane (the reference plane containing the reference vector). The system 200 may calculate any such angles. The horizontal receiver array 220 includes a set of receiver elements 221 arranged in a pattern; e.g., along a horizontal axis.
The set of receiver elements 221 may include a single receiver paired to multiple antennas (e.g., spaced in a particular pattern and/or with antennas coupled to phase/time delays); multiple receivers, each paired to a single antenna; multiple receivers paired to multiple antennas; or any other configuration. The horizontal receiver array 220 may additionally include any number of virtual receiver elements 222. As described with respect to the method 100, virtual receiver elements 222 are created in response to the phase-shifting of the output of one or more real transmitters 210 (or by interpolation) and may correspond to a translated receiver element 221 of the horizontal receiver array 220. The horizontal receiver array 220 preferably is used to calculate angles from phase information, but may additionally or alternatively be used to calculate angles in any manner (e.g., using the horizontal component of Doppler frequency shift). The vertical receiver array 230 is preferably substantially similar to the horizontal receiver array 220, except that the vertical receiver array is arranged upon an axis not parallel to the axis of the horizontal receiver array (e.g., a vertical axis). The vertical receiver array 230 is preferably used to calculate elevation, but may additionally or alternatively be used to calculate any tracking parameters. The vertical receiver array 230 includes a number of receiver elements 231 and may additionally include any number of virtual receiver elements 232. As described with respect to the method 100, virtual receiver elements 232 are created in response to the phase-shifting of the output of one or more real transmitters 210 and may correspond to a translated receiver element 231 of the vertical receiver array 230. The signal processor 240 functions to calculate tracking parameters from data collected by the horizontal receiver array 220, the vertical receiver array 230, and/or the velocity sensing module 250. The signal processor 240 preferably includes a microprocessor or microcontroller that calculates tracking parameters according to the method 100; additionally or alternatively, the signal processor 240 may calculate tracking parameters in any manner. The signal processor 240 may additionally or alternatively be used to communicate with an external computer (e.g., to offload computations, receive additional data, or for any other reason). The signal processor 240 may also control configuration of the components of the system 200 or any calculations or actions performed by the system 200. For example, the signal processor 240 may be used to control creation and/or other parameters of virtual transmitters or virtual array elements as described with respect to the method 100. The velocity sensing module 250 functions to determine the velocity of the system 200 (or components of the system 200, or an object coupled to the system 200). The velocity sensing module is preferably a communications interface that couples to an inertial measurement unit (IMU), but may additionally or alternatively be any communications interface (e.g., Wi-Fi, Ethernet, OBD-II) or sensor (accelerometer, wheel speed sensor, IMU) capable of determining a speed and/or velocity. The methods of the preferred embodiment and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components preferably integrated with a system for IVAA radar tracking.
The computer-readable instructions can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application specific processor, but any suitable dedicated hardware or hardware/firmware combination device can alternatively or additionally execute the instructions. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
11860268
DETAILED DESCRIPTION OF EMBODIMENTS
The present disclosure describes various embodiments of systems and methods for controlling operation of a control system of a location. More generally, Applicant has recognized and appreciated that it would be beneficial to control operation of a control system of a location based on a number of occupants in the location. A particular goal of certain embodiments of the present disclosure is to accurately determine a number of occupants in a location to increase the efficiency and/or effectiveness of a control system for that location. In view of the foregoing, various embodiments and implementations are directed to a system and method for determining a number of occupants in a location and operating a control system of the location in response to the number of occupants. The disclosed system may include both a motion detector subsystem including one or more motion sensors, such as a lighting system having one or more embedded PIR sensors, and a radiofrequency (RF) subsystem including one or more RF transceivers, such as a network router. Data gathered by the RF transceivers is used to generate a first occupant estimate with a first algorithm, and the data gathered by the motion sensors is used to generate a second occupant estimate with a second algorithm. The estimates produced by the two sensor modalities are fused to produce an accurate count of occupants at a location. The first and second algorithms can be trained by using the data and/or estimate related to each subsystem as an input to the algorithm associated with the other subsystem, thereby further improving their respective accuracies over time. Accurate occupant estimates can be used to operate a control system of the location, such as to provide better or more efficient lighting, temperature, ventilation, and space optimization, thereby maximizing the energy efficiency and occupant comfort of the building. Referring to FIG. 1, in one embodiment, a system 100 is provided to determine a number of occupants (e.g., a number of people) within a location 102 using multiple modalities. The operation of certain functions or features of the location can be controlled in response to the determined number of occupants. The system 100 includes a radiofrequency (RF) subsystem designated herein with the reference numeral 104, and a motion detector subsystem designated herein with the reference numeral 106. As will be described in more detail below, the RF subsystem 104 and the motion detector subsystem 106 are together used to determine the number of occupants in the location 102. The term “occupant” may be used herein interchangeably with “individual” and these terms are intended to refer primarily to people, but it is to be appreciated that these terms could alternatively in some embodiments refer to animals, insects, etc., or even non-living entities that move in, out, and/or about an environment (e.g., due to wind, water currents, etc.). In FIG. 1, the location 102 is illustrated as an office space having desks, workstations, conference rooms, etc., but it is to be appreciated that any other area, indoor or outdoor, could be monitored. The system 100 may include a control system 105, or more specifically, if the location 102 is a building (e.g., an office space), the control system 105 may be referred to as a building control system.
For example, the control system 105 may be, or include, a heating ventilation and air conditioning (HVAC) system, a sound masking system, a lighting system, a security system, or any other system or functionality useful to the location 102. The RF subsystem 104 includes one or more transceivers capable of transmitting and receiving radiofrequency (RF) waves. By transceiver it is meant any device, or combination of devices (e.g., a separate transmitter and receiver), capable of transmitting and receiving RF waves. In FIG. 1, the positions of four such transceivers are indicated by the reference characters A, B, C, and D. In one embodiment, the transceivers of the RF subsystem 104 are, or include, Wi-Fi enabled routers. It is to be appreciated that other radiofrequency-based communication or signal generating and receiving systems could be implemented via any combination of relevant hardware and/or software known or developed in the art. It is to be appreciated that any RF-based detection technology could be used for the subsystem 104. For example, RF waves have been used in the art to identify the movement of individuals based on a transceiver, such as a smartphone, held by the individuals. It has also been found that RF waves can be used to track people throughout a location based on the reflections of the RF waves transmitted and then received by a transceiver, as discussed in more detail below. Advantageously, RF transceivers in the form of networked Wi-Fi routers are pervasive in many buildings and are thus well suited to form the RF subsystem 104 in many common environments. The motion detector subsystem 106 in FIG. 1 includes sixty-five motion sensors designated with the numerals 1 through 65 in that figure. “Motion sensors” as used herein refers to any device or technology that detects objects or movement of objects within a direct line of sight or field of vision of the sensors. It is to be appreciated that motion can be determined based on various parameters detected by the sensor that are indicative of motion. For example, many common motion detectors detect motion based on sensed differences in heat between the moving object and the surrounding environment. In one embodiment, the motion sensors include passive infrared (PIR) sensors, although other motion sensors could be used, such as a camera or other sensor capable of receiving visible light signals. Lighting systems are ubiquitous infrastructure in buildings and office spaces. So-called “smart” lighting systems feature one or more luminaires equipped with Light-Emitting Diodes (LEDs) or other controllable light sources, which may be connected to each other and/or other network devices via Ethernet or wireless networks. The luminaires also have PIR or other sensors for controlling operation of the lights in an energy-efficient fashion (e.g., the sensors enabling the lights to automatically turn on/off depending on whether there is detected movement). Connectivity enables the individual luminaires to work together to maximize energy efficiency and enables remote monitoring and predictive maintenance of the system. Advantageously, this type of existing lighting system, having embedded PIR or other motion sensors, can be used to form the motion detector subsystem 106. Other existing systems having motion sensors, such as security systems or the like, could alternatively or additionally be utilized, or motion sensors could be deployed specifically for the purpose of forming the motion detector subsystem 106.
FIG. 2 illustrates one example of a motion sensor-enabled device in the form of a ceiling-mounted light fixture (or luminaire) 106a having an embedded PIR sensor that enables the light fixture 106a to turn on/off depending on detected motion. A lighting system could include one or more of the light fixtures 106a. The PIR sensor has a field of vision 112, which generally takes a conical or pyramidal shape having a height H originating at the PIR sensor. The light fixture 106a and/or other motion sensors used by the subsystem 106 may additionally or alternatively include the ability to distinguish between different types of movement. For example, the motion sensors may be able to distinguish between "major" (e.g., an entire body moving) and "minor" (e.g., just a limb of a body moving) movements, such as via the relative detected size of the moving object and/or the detected speed of movement. Additionally, the light fixture 106a or other motion sensor of the motion detector subsystem 106 may be able to recognize a plurality of different physical areas or zones, such as a first zone 114 bounded by X1 and Y1 and a second zone 116 bounded by X2 and Y2 in FIG. 2 (e.g., by using multiple sensors as is generally known in the art). The zones could be arranged in any pattern, such as a grid, concentric circles, etc. In this way, each motion sensor can define one or more individual zones of the location 102. The individual zones can be combined to create more general zones that correspond to larger areas of the location 102. For example, referring back to FIG. 1, the location 102 is separated generally into four different zones indicated by dashed lines, although it is to be appreciated that the location 102 could be separated into any other number of zones. In this way, the motion sensors can be used to determine not just a total number of occupants, but also the relative position, or locality, of the occupants. Additionally, this information could be used by the control system 105 to enable, disable, or alter functionality of its components in only specific areas (e.g., reduce the temperature in one zone while maintaining the temperature in all other zones).

The system 100 may also include a controller 110 having a processor 107, a memory 108, and/or a communication module 109. The controller 110 can be utilized to store the data gathered by the subsystems 104 and 106 (e.g., in the memory 108) and/or to calculate the occupancy based on the gathered data (e.g., with the processor 107). In one embodiment, the controller 110 is also used to control the components of the control system 105 (e.g., HVAC system). Alternatively, the control system 105 may include a separate controller akin to the controller 110 that is in communication with the controller 110. As should be appreciated in view of the above description, elements of the various systems and subsystems may be shared (e.g., the control system 105 may control operation of Wi-Fi enabled routers that form the subsystem 104, or control operation of a lighting system, which includes PIR or other sensors that form the subsystem 106). The controller 110 may be part of either of the subsystems 104 and/or 106, the control system 105, or separate from, but in communication with, these systems and subsystems. It is to be appreciated that multiple controllers could be used in lieu of the single controller 110, e.g., the subsystem 104 and the subsystem 106 may have separate controllers that communicate with each other.
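The zone and field-of-vision geometry described above can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not part of the disclosed system: it assumes a ceiling-mounted sensor whose conical field of vision 112 of height H meets the floor in a circular footprint, and tests which desks fall inside that footprint. The mounting height, cone half-angle, and the sensor/desk coordinates are hypothetical values standing in for real commissioning data.

```python
import math

def fov_radius(height_m: float, half_angle_deg: float) -> float:
    """Radius of the conical field of vision where it meets the floor."""
    return height_m * math.tan(math.radians(half_angle_deg))

def desk_in_fov(sensor_xy, desk_xy, height_m=3.0, half_angle_deg=45.0) -> bool:
    """True if the desk lies within the sensor's floor-level footprint."""
    dx = desk_xy[0] - sensor_xy[0]
    dy = desk_xy[1] - sensor_xy[1]
    return math.hypot(dx, dy) <= fov_radius(height_m, half_angle_deg)

# Example: build zone membership for a handful of sensors and desks.
sensors = {1: (0.0, 0.0), 2: (4.0, 0.0)}
desks = {"d1": (1.0, 1.0), "d2": (6.0, 0.5)}
coverage = {s: [d for d, xy in desks.items() if desk_in_fov(s_xy, xy)]
            for s, s_xy in sensors.items()}
print(coverage)  # {1: ['d1'], 2: ['d2']}
```

Repeating the same membership test per sensor yields the individual zones that can then be combined into the larger zones of FIG. 1.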
The transceivers of the subsystem 104, the sensors of the motion detector subsystem 106, the components of the control system 105, and the controller 110 may communicate with or amongst each other via any wired or wireless communication technology (e.g., Bluetooth, Wi-Fi, Zigbee, Ethernet, etc.). The processor 107 may include any suitable form of device, mechanism, or module configured to execute software instructions, such as a microcontroller, plural microcontrollers, circuitry, a single processor, or plural processors. The memory 108 may include any suitable form or forms, including a non-volatile memory or volatile memory. Volatile memory may include random access memory (RAM). Non-volatile memory may include read only memory (ROM), flash memory, a hard disk drive (HDD), a solid state drive (SSD), or other data storage media. The memory 108 may be used by the processor 107 for the temporary storage of data during its operation. Data and software, such as the data gathered by the subsystems 104 and 106 and the algorithms discussed below, an operating system, firmware, or other data or applications, may be installed or stored in the memory 108. The communication module 109 can be or include any transmitter, receiver, antenna, radio, or other communication device, mechanism, or technology, as well as software configured to enable operation thereof.

FIG. 3 includes a block diagram from which further aspects of the operation and structure of the system 100 can be appreciated. In order to determine the number of occupants, the system 100 may include a first algorithm 118 (or "RF algorithm 118"), which is built and/or trained to estimate the occupancy of the location 102 based on a first set of data (or "RF data") measured by the RF subsystem 104 (e.g., data corresponding to reflected RF waves), and a second algorithm 120 (or "motion detector algorithm 120"), which is built and/or trained to estimate the occupancy of the location based on a second set of data ("motion data") measured by the motion detector subsystem 106 (e.g., data corresponding to detected movement in the field of vision of each motion sensor). In one embodiment, the first and/or second algorithms 118 and 120 are, or employ the use of, machine learning algorithms. It is to be appreciated that any number of machine learning systems, architectures, and/or techniques, e.g., artificial neural networks, deep learning engines, etc., could be utilized. In order to build and/or train the RF algorithm 118, the layout of the location 102 (e.g., data describing the physical layout of the location 102, such as the boundaries of different zones, the location of each desk or workstation, etc.) can be provided to the RF algorithm 118 as an input. Additionally, the RF algorithm 118 may receive as an input the location or coordinates of each of the transceivers (TX) of the RF subsystem 104. Similarly, the motion detector algorithm 120 may receive as inputs the layout of the location 102 as well as the location or coordinates of the sensors of the motion detector subsystem 106. The location coordinates can be provided according to any reference coordinate system. For example, if the motion sensors are embedded as part of the luminaire (e.g., as discussed with respect to the light fixture 106a), this information can be determined from a commissioning database for the lighting system. The location of other notable features, such as desks, particular zones, etc., can also be set using the same coordinate system.
In operation, the algorithms 118 and 120 can be utilized to calculate a first occupant estimate 122 (or RF-based estimate 122) based on the RF data measured by the RF subsystem 104 and a second occupant estimate 124 (or motion-based estimate 124) based on the motion data measured by the motion detector subsystem 106. As discussed in more detail below, the estimates 122 and 124 can be used to help reinforce performance of the algorithms 118 and 120 by providing the RF-based estimate 122 to help train the motion detector algorithm 120 and the motion-based estimate 124 to help train the RF algorithm 118. Additionally, as also discussed in more detail below, the estimates 122 and 124 can be fused or combined at a fusion module 126 to produce a final fused occupancy count or estimate. In one embodiment, the controller 110 includes the fusion module 126, which can be implemented via software, e.g., installed in the memory 108 of the controller 110. The controller 110 can be used to perform the reinforcement, e.g., via the fusion module 126 if the reinforcement is performed as part of the fusion process. The fused occupant estimate can be sent to a control system for the location, e.g., the control system 105 of the location 102, to enable, disable, and/or otherwise modify the function or operation of components of the control system (e.g., increase or decrease temperature, turn on/off ventilation fans, change the intensity of a sound masking system, etc., in response to the changing numbers of occupants). As noted above, the estimates 122 and 124, and thus the fused estimate, may correlate the occupants to different coordinates or zones, to enable the control system 105 to control operation separately and/or differently in each zone. It is noted that the inputs to the RF and motion detector algorithms 118 and 120 in both training and operation can additionally be developed from data from the location 102 and/or the subsystems 104 and 106, depending on the particular construction of the system 100.

In one embodiment, the RF subsystem 104 is, includes, or is arranged using the structure and/or principles of the WiTrack system developed by the Massachusetts Institute of Technology. In this embodiment, the RF subsystem 104 would operate by transmitting an RF signal and capturing its reflections off a human body. Occupant estimates would be generated based on the received data from the reflected RF waves, as described generally below. In one non-limiting example, the RF algorithm 118 may use the data received by the RF subsystem 104 to track the motion of occupants by processing the signals from the transceivers (e.g., receiver antennas). First, the time-of-flight (TOF) can be measured as the time it takes for a signal to travel from a transceiver (e.g., transmitting antenna) of the RF subsystem 104 to the reflecting body, and then back to the transceiver (e.g., receiving antenna) of the RF subsystem 104. An initial measurement of the TOF can be obtained using a frequency modulated carrier wave (FMCW) transmission technique. The estimate can be cleaned to eliminate multipath effects and abrupt jumps due to noise. Once the TOF is determined, as perceived from each of the transceivers (e.g., receiving antennas), the geometric placement of the transceivers (e.g., based on the coordinate inputs noted above) can be utilized to localize the moving body in three dimensions. Additionally, this type of system can be used to detect a fall by monitoring fast changes in the elevation of an individual or object and the final elevation after the change.
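The TOF-based localization described above can be sketched as a small least-squares problem. The snippet below is a simplified illustration, not the WiTrack implementation: it assumes monostatic transceivers at known coordinates (so the one-way range is c·TOF/2) and recovers a reflector's 3D position from synthetic TOF values; the transceiver layout and the use of scipy's least_squares solver are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # speed of light, m/s

# Known transceiver coordinates (the "geometric placement" input), meters.
transceivers = np.array([[0.0, 0.0, 3.0],
                         [10.0, 0.0, 3.0],
                         [0.0, 8.0, 3.0],
                         [10.0, 8.0, 3.0]])

def residuals(p, anchors, ranges):
    # Difference between modeled one-way ranges and measured ones.
    return np.linalg.norm(anchors - p, axis=1) - ranges

def localize(tofs, anchors=transceivers):
    ranges = C * np.asarray(tofs) / 2.0      # round-trip TOF -> one-way range
    x0 = anchors.mean(axis=0)                # start from the centroid
    sol = least_squares(residuals, x0, args=(anchors, ranges))
    return sol.x                             # estimated 3D position

# Example with synthetic TOFs for a reflecting body at (4, 3, 1):
body = np.array([4.0, 3.0, 1.0])
tofs = 2 * np.linalg.norm(transceivers - body, axis=1) / C
print(localize(tofs))  # ~ [4., 3., 1.]
```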
Such RF-based systems can also be used to differentiate between minor and major movements, such as distinguishing between motion of an arm and motion of a whole body. The algorithm 120 can be similarly built and used in accordance with its specific needs, e.g., to include simulations or field experiments that enable the algorithm 120 to correlate the sensed movement detection data of the motion detector subsystem 106 into an occupant estimate. In one specific non-limiting example, it can be assumed that occupancy of an area can be measured based on the number of people using the space, such as via the number of desks that are occupied in an open office space. In this example, let $X = \{x_1, \ldots, x_N\}$ indicate the "N" motion sensors in the location (i.e., the subsystem 106), and $Y = \{y_1, \ldots, y_M\}$ indicate the "M" occupied desks (i.e., the estimated number of occupants). The motion sensors can be configured to detect or measure motion, e.g., output 1 if there is motion and 0 otherwise. In some embodiments, additional information, such as the relative size or speed of the moving object, could be determined. The number of sensors (N) can be large, and thereby the function approximation may not be trivial. Hence, to perform dimension reduction, the sum of triggered sensors, $B_{sum}$, can be determined as $B_{sum}(t) = \sum_{i=1}^{N} x_i(t)$. Further, total desk occupancy, $A_{sum}$, can be given by $A_{sum}(t) = \sum_{i=1}^{M} y_i(t)$.

One of the key requirements for supervised learning algorithms (e.g., training of the algorithm 120) is access to labelled data (that is, data that relates to examples considered to be true, known, or the ground truth, upon which the algorithm is based, or learns if machine learning is utilized). This requires measuring a large amount of data for {X, Y} as defined above. This can be done via actual experimentation, or by building a model that emulates the behavior in the location while being computationally tractable. This type of model may be referred to as a surrogate model. FIG. 4 illustrates a block diagram describing how a surrogate model 128 can be used to create the algorithm 120 according to one embodiment. The surrogate model 128 can be used in an "offline" or learning phase to create a mapping function (g) defining or used by the algorithm 120 in an "online" or operational phase. As noted above, data pertaining to the physical layout of the location 102 as well as the coordinates of the motion sensors of the subsystem 106 and the desks in the location 102 can be set according to the same frame of reference or global coordinate system and provided to the model. In this way, the coordinate data can be considered as a bi-partite graph wherein motion sensors and desks are two disjoint sets, with an edge between a sensor and a desk whenever the desk is within the sensing region (e.g., the field of vision 112) of that sensor. In building the model 128, it can be assumed that if movement is detected in the field of view of a motion sensor, it will translate to the sensor identifying an occupied state, e.g., the sensor output will be 1. Given this surrogate model 128, data can be simulated by any desired method. In one embodiment, Monte Carlo analysis is performed by randomly simulating desk occupancy in the location 102 (giving known values for $A_{sum}$), and subsequently using the surrogate model 128 to determine the number of sensors that are triggered ($B_{sum}$, as defined above). After collecting a sufficiently large amount of data, one can determine a function (g) that maps the triggered sensors ($B_{sum}$) to occupant count ($A_{sum}$).
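A minimal sketch of this Monte Carlo procedure is shown below. It is illustrative only: the sensor/desk coverage graph is randomized rather than derived from a commissioning database, and the low-order polynomial used for the mapping function (g) is an assumption; any regression matching the shape of FIG. 5 could be substituted.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_DESKS, N_TRIALS = 65, 120, 5000

# coverage[i, j] = 1 if desk j is inside the field of vision of sensor i
# (here randomized; in practice derived from the bipartite coverage graph).
coverage = (rng.random((N_SENSORS, N_DESKS)) < 0.05).astype(int)

a_sum, b_sum = [], []
for _ in range(N_TRIALS):
    occupied = rng.random(N_DESKS) < rng.uniform(0.05, 0.9)  # random occupancy
    y = occupied.astype(int)
    x = (coverage @ y > 0).astype(int)   # a sensor triggers if any covered
    a_sum.append(y.sum())                # desk it sees is occupied
    b_sum.append(x.sum())

# Fit g: triggered-sensor count -> occupant count (low-order polynomial here).
g = np.polynomial.Polynomial.fit(b_sum, a_sum, deg=3)

print(round(g(30)))  # estimated occupancy when 30 sensors are triggered
```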
One example is illustrated in FIG. 5, in which each dot represents a value of $B_{sum}$ calculated from different given values of true occupancy ($A_{sum}$) under different conditions (e.g., occupying desks in different zones), with the function (g) being the best approximation correlating $B_{sum}$ to true occupancy ($A_{sum}$). It should be appreciated that instead of the surrogate model 128, the function (g) could be generated by performing actual experimentations in the location by altering the true occupancy ($A_{sum}$) and measuring the number of triggered sensors ($B_{sum}$). Another consideration is that it may be necessary to convert the actual or real-life motion/detection data from the motion sensors of the motion detector subsystem 106 to align with the surrogate model 128. That is, since the surrogate model 128 did not consider people moving about the location, and also did not consider both major and minor movement, real-life scenarios in this example may tend to overestimate the number of occupants due to the increased sensor activity. For this, the multi-level information provided by motion sensors that differentiates between the major and minor movement, as noted above with respect to FIG. 2, can be exploited. Thus, it can be set or assumed that the minor movement is related to people working at their desks, and thus used to tally a value akin to $B_{sum}$ used by the surrogate model 128, while major movements are assumed to correspond to people transiently moving throughout the location 102 and thus not tallied. In this example, a pre-processing unit 130 is included and configured to evaluate the motion data to identify data related to both minor and major movements and to pass only the data related to minor movement to the mapping function (g) to determine occupant count. Of course, in other embodiments, it may be desirable to count both minor and major movement, or to tally only major movement while disregarding minor movement, or to process the motion data in some other manner to bring consistency between the surrogate model and the data measured by the motion detector subsystem 106 when in actual operation.

As noted above, the RF-based estimate 122 and the motion-based estimate 124 can be fused by the fusion module 126 according to any data or information fusion technique. In one embodiment, let $N_{RF}$ and $N_{MD}$ represent the occupant estimates 122 and 124 given by the RF subsystem 104 and the motion detector subsystem 106, respectively. The variance of the two systems can be denoted by $V_{RF}$ and $V_{MD}$, respectively. The two occupant estimates can then be fused by the fusion module 126 to get the final occupancy count N by the equation:

$N = \frac{N_{MD}/V_{MD} + N_{RF}/V_{RF}}{1/V_{MD} + 1/V_{RF}}$

If desired, the computational error can also be analyzed by determining the probability of errors occurring each time the system 100 makes an occupancy determination. For example, the probability of incurring at most 'k' errors in a year is given by:

$\mathrm{Prob}(0 \le k \le 4) = \sum_{k=0}^{4} \binom{N}{k}\, p_{fail}^{k} \,(1 - p_{fail})^{N-k}$,

where N is the total number of reported estimates in a year, and $p_{fail}$ is the probability of undercounting occupants by some amount. For example, if it is assumed that the system 100 reports occupancy every hour during an eight-hour work period during each weekday, $p_{fail} = 1 - 0.99146$ denotes the current probability of undercounting the occupants by more than 10%. Such an analysis provides the minimum improvement that is needed to achieve more than 95% probability of having at most four incidents of undercounting the occupants by more than 10%.
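Both of these calculations are short enough to state directly in code. The sketch below implements the inverse-variance fusion equation and the binomial error-probability expression above; the example estimates, variances, and p_fail value are hypothetical inputs.

```python
from math import comb

def fuse(n_md, v_md, n_rf, v_rf):
    """Inverse-variance weighted fusion of the two occupant estimates."""
    return (n_md / v_md + n_rf / v_rf) / (1.0 / v_md + 1.0 / v_rf)

def prob_at_most_k(n_reports, p_fail, k_max=4):
    """P(0 <= k <= k_max) for k ~ Binomial(n_reports, p_fail)."""
    return sum(comb(n_reports, k) * p_fail**k * (1 - p_fail)**(n_reports - k)
               for k in range(k_max + 1))

# The lower-variance (more trusted) estimate dominates the fused count:
print(fuse(n_md=22, v_md=4.0, n_rf=25, v_rf=1.0))   # 24.4

# Hourly reports over an eight-hour day, five days a week, 52 weeks:
print(prob_at_most_k(8 * 5 * 52, p_fail=0.001))
```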
FIG. 6 shows how the probability of failing at most four times in a year varies for different reporting frequencies as a function of $p_{fail}$. This graph (and similar graphs for other reporting frequencies) can be further used to train the algorithms 118 and 120 corresponding respectively to the subsystems 104 and 106.

FIG. 7 illustrates a method 150 for operating a system (e.g., the system 100) configured to estimate the occupancy of a location and control features or functionality of the location according to one embodiment disclosed herein. The method 150 starts at steps 152 and 154, in which a first set of data (i.e., RF data) is gathered by an RF subsystem (e.g., the RF subsystem 104) and a second set of data (i.e., motion data) is gathered by one or more motion sensors (e.g., the motion sensors of the motion detector subsystem 106). At a step 156, a first occupant estimate (e.g., the RF-based estimate 122) is made from the RF data (e.g., via the RF algorithm 118), while at a step 158 a second occupant estimate (e.g., the motion-based estimate 124) is made from the motion data (e.g., via the motion detector algorithm 120). The method may then proceed to a reinforcement phase 160, if desired, by proceeding from the steps 156 and 158 to steps 162 and 164, respectively. At the step 162 the RF data is used as an input to train the motion detector algorithm, while at the step 164 the motion data is used as an input to train the RF algorithm. For example, the RF data, including the RF-based estimate 122, could be input as a "labelled" example or known information to the motion detector algorithm 120, while the motion data, including the motion-based estimate 124, could be input as a "labelled" example or known information to the RF algorithm 118. Additional examples are provided below in which the data and/or estimate associated with each of the algorithms is used to recalibrate the parameters of the other algorithm. In this way, each of the different subsystems is used to train, or reinforce, the algorithm associated with the other, and the unique advantages of each subsystem are able to reinforce the ability of the algorithms to most accurately estimate occupancy. In one embodiment, only one of the algorithms is trained during the reinforcement phase 160 (e.g., either the step 162 or the step 164). The reinforcement phase 160 could be performed for each iteration of the method 150, or periodically over time. Since the RF data and estimate are used to train the motion detector algorithm, and the motion-based estimate and motion data are used to train the RF algorithm, the step 162 returns to the step 154, while the step 164 returns to the step 152. If the reinforcement phase 160 is not used, then the steps 156 and 158 instead proceed to a step 166 in which the RF-based estimate and the motion-based estimate are fused (e.g., via the fusion module 126 as discussed above). Lastly, the method 150 includes a step 168 in which a control system of the location (e.g., the control system 105 of the location 102) is controlled in response to the fused estimate generated in the step 166. The method 150 can repeat as often as desired to enable the control system 105 to actively and timely operate in response to the number of occupants in the location 102. One embodiment for the reinforcement phase 160 of the method 150 can be appreciated in view of the above description and FIG. 8. In this embodiment, the outputs of the two subsystems 104 and 106 are fused to enhance the accuracy of the overall occupant counting.
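Stepping back, the flow of the method 150 of FIG. 7 can be summarized in a few lines. The sketch below is a hypothetical orchestration loop, not code from the disclosed system: the subsystem, algorithm, fusion, and control objects (and their gather/estimate/train/fuse/apply methods) are assumed interfaces standing in for the components described above.

```python
def run_cycle(rf_subsystem, md_subsystem, rf_algo, md_algo, fusion, control,
              reinforce=False):
    rf_data = rf_subsystem.gather()          # steps 152/154: collect data
    md_data = md_subsystem.gather()

    rf_est = rf_algo.estimate(rf_data)       # steps 156/158: per-modality
    md_est = md_algo.estimate(md_data)       # occupant estimates

    if reinforce:                            # phase 160: each modality's
        md_algo.train(rf_data, rf_est)       # output labels the other
        rf_algo.train(md_data, md_est)       # algorithm's training input
        return None                          # loop back to data gathering

    fused = fusion.fuse(rf_est, md_est)      # step 166: fuse estimates
    control.apply(fused)                     # step 168: drive control system
    return fused
```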
As noted above, the motion detector algorithm 120 and the motion-based estimate 124 can be based on a relationship between the true occupancy in a location and the total number of times the motion sensors of the subsystem 106 are triggered. For example, this relationship can be determined by real-life experiments or in an offline training phase using Monte Carlo or other simulation, as discussed above and shown in FIG. 5. The analysis results in a set of (occupancy, trigger) pairs, which are plotted as dots in both FIGS. 5 and 8. A nonlinear function is fit to these observations to define the relationship between the occupancy and the number of sensor triggers. For example, let f(x, θ) denote the nonlinear function, where x denotes the number of sensor triggers and θ denotes one or more tunable parameters of the function f. This function f can then be used by, and/or comprise, the motion detector algorithm 120. In this example, the tunable parameters θ can be improved using feedback provided by the RF subsystem 104. That is, the algorithm 120 corresponding to the motion detector subsystem 106 can be reinforced by inputting the data collected by the transceivers of the RF subsystem 104 and/or inputting the RF-based estimate 122 during training of the algorithm 120. That is, the data from the two subsystems 104 and 106 can be synchronized, e.g., using timestamps, to obtain the total number of motion sensor triggers that corresponds not only to the motion-based estimate 124, but also to the RF-based estimate 122. This results in additional observation-pairs of the type (occupancy, triggers), where the occupancy value is not provided from simulations/experimentations, but instead from the data of the RF subsystem 104 (e.g., from the RF-based estimate 122). These data points are denoted as Xs in the example of FIG. 8. The parameters θ can be recalibrated using this new set of data points in order to recalculate the function f, which in turn redefines the algorithm 120. Moreover, these observations may be weighted differently than the results of the simulations/experiments to reflect the confidence in the estimates provided by the RF subsystem 104 (e.g., the estimate 122 generated via the data of the RF subsystem 104 could be weighted more or less heavily than the simulations/experiments). The recalibration may be performed periodically, or triggered by events, such as the detected ingress of a large crowd or a scheduled event.

As another embodiment for the reinforcement phase 160, the data collected by the motion detector subsystem 106 and/or the motion-based estimate 124 may alternatively or additionally be used to improve the accuracy of the RF-based estimate 122. For example, procedures such as Successive Silhouette Cancellation (SSC) may be employed by the RF algorithm 118 to overcome the aforementioned near-far problem. Again, the near-far problem arises when reflections off nearer occupants have more power than reflections off more distant occupants, thereby obfuscating the signals from the distant occupants and frustrating the ability of the RF subsystem 104 to detect or track these occupants. SSC generally entails mapping the location of the nearest occupant that would have generated the TOF measurements and then cancelling this effect to recover the locations of other occupants.
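The cancellation idea behind SSC can be illustrated on a one-dimensional range profile. The sketch below is a loose, hypothetical analogue rather than the actual SSC procedure: it assumes a Gaussian pulse response and simply subtracts the strongest reflector's template so that a weaker, more distant reflector becomes detectable.

```python
import numpy as np

def gaussian_pulse(bins, center, width=3.0):
    """Assumed pulse response of a single reflector in the range profile."""
    return np.exp(-0.5 * ((bins - center) / width) ** 2)

bins = np.arange(200)                           # range bins
profile = (1.0 * gaussian_pulse(bins, 40)       # near occupant, strong echo
           + 0.15 * gaussian_pulse(bins, 120))  # far occupant, weak echo

detections = []
residual = profile.copy()
for _ in range(2):                              # cancel two silhouettes
    peak = int(residual.argmax())               # strongest remaining echo
    amp = residual[peak]
    detections.append(peak)
    residual -= amp * gaussian_pulse(bins, peak)  # subtract its template

print(detections)  # [40, 120]: the far occupant emerges after cancellation
```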
To this end, the known coordinates of the motion sensors of the subsystem 106 (and/or the field of view of the motion sensors), e.g., via commissioning information that defines the locations of the motion sensors, office layout, desks, etc., can be used to verify the location of detected occupants. For example, the known coordinates of triggered sensors may be contemporaneously reviewed by the RF subsystem 104 (or timestamps used to synchronize and compare the motion data to the RF data). In this way, the parameters of the algorithm 118 are recalibrated, thereby improving the accuracy of mapping TOFs to different locations. This enhances the overall accuracy of the RF-based estimate 122 by improving the ability of the RF algorithm 118 to overcome the near-far problem when determining its estimate.

While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure. The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
DETAILED DESCRIPTION Disclosed herein are centralized object detection sensor network systems adapted for long-baseline sensing, e.g., radar and/or lidar sensor networks, that support extended spatial and/or angular coverage. Possible applications for the disclosed systems include, but are not limited to, autonomous vehicles (e.g., ground-, air-, or sea-based), extended indoor spaces, and outdoor areas. To address these and other needs, disclosed herein is a centralized object detection sensor network system comprising a central unit configured to generate one or more probing signals for detecting one or more objects in an environment, and one or more transponders configured to receive the one or more probing signals and convert them into free space waves for detecting the one or more objects in the environment.

Radio detection and ranging (RADAR) and light detection and ranging (LIDAR) are well-known sensing modalities for detecting objects and determining their range and/or velocity (speed and direction of motion) in an environment. Radar can be used to detect objects in a larger area and in low-visibility weather conditions compared to LIDAR; however, LIDAR provides higher resolution and accuracy. A detection and ranging system may use one or more RADARS and/or one or more LIDARS to facilitate detection of one or more objects distributed in an environment. The signal processing functions and components employed for generation of a probing signal (e.g., used to generate radio waves or light waves) and processing the corresponding echo signal (e.g., generated by reflection of the radio waves or the light waves), to determine the location and/or velocity of an object, may involve complex, high-power-consumption digital and analog signal processing units. In many applications, object detection systems may employ multiple sensors located in different positions to detect objects (e.g., to improve the accuracy, to cover larger areas, cover different directions and the like). As such, there is a need for methods and systems that enable reducing the number, the complexity, and the electrical power demands of signal processing units employed for detection and range finding based on multiple sensors. For example, if multiple sensors are connected to a central processing unit where all complex signal processing steps associated with extracting information about the detected objects (e.g., position, velocity and the like) are performed, the size, cost, power consumption and complexity of the sensors may be reduced. Such a centralized approach may allow using more sensors and distributing them over a larger area. A centralized sensor system may also benefit by making the original ("raw") information contained in multiple echo signals available to the central processing unit and exploiting it through the more advanced signal processing chains that can be implemented in the central processing unit. It should be appreciated that if the received echo signals are processed within each sensor, the level and accuracy of the information extracted from the echo signals may be limited by the computational resources available within each sensor, resulting in an inefficient usage of the raw information encoded in the echo signal.
Additionally, if centralized sensor systems and methods can support both RADAR and LIDAR sensors to enable detection based on both radio waves and light waves, they may enable the realization of compact multi-emitter detection systems that combine high-resolution directional detection with long-range wide-area detection and low-visibility resilience.

FIG. 1A illustrates a centralized object detection sensor network system 100, according to various embodiments. The centralized object detection sensor network system 100 may include a central signal generation and processing unit 102, also referred to herein as a central unit, one or more transponders 104 and one or more communication links 106, also referred to herein as links, that communicatively connect the one or more transponders 104 to the central unit 102. The one or more links 106 can be bidirectional analog or digital communication links. Further, the one or more links 106 may include one or more of optical communication links, RF communication links, Ethernet communication links or other types of digital or analog communication links. In some cases, a subset of the links may be RF communication links or optical links or Ethernet links. In some embodiments, the one or more links 106 may comprise multiple-input and multiple-output (MIMO) RF or optical links.

FIG. 1B illustrates some details associated with object detection using the centralized object detection sensor network system 100. In various embodiments, the central unit 102 generates one or more probing signals configured for detecting objects in an environment and transmits the one or more probing signals to the one or more transponders 104 via the one or more links 106. The one or more transponders 104 convert the one or more probing signals to one or more free space probing waves 108 and direct them to the environment. The one or more free space probing waves can be free space radio waves or free space optical waves. In some embodiments, for example where radio waves are used for object detection, to generate the one or more free space probing waves, the one or more transponders 104 first convert (e.g., upconvert) the one or more probing signals (e.g., baseband or intermediate frequency probing signals) to one or more RF probing signals. The one or more transponders 104 may use the one or more RF probing signals to generate the one or more free space probing waves 108 (e.g., radio waves). The environment may include one or more objects 110 that generate one or more free space echo waves 112 by reflecting a portion of the one or more free space probing waves 108. The transponders 104 may receive portions of the free space echo waves 112, convert the received portions of the free space echo waves 112 to one or more echo signals and transmit the one or more echo signals back to the central unit 102 via the one or more links 106. In some embodiments, for example where the echo signals are generated using radio waves, the one or more transponders 104 convert one or more free space echo waves (e.g., received via one or more antennas) to one or more RF echo signals. Subsequently, the one or more transponders 104 may use the one or more RF echo signals to generate the one or more echo signals. The central unit 102 receives and processes the one or more echo signals to determine positions and velocities of the one or more objects 110.
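As a simplified illustration of how the central unit 102 might turn an echo signal into a range, the sketch below cross-correlates a transmitted baseband waveform with a delayed, attenuated, noisy copy of itself and converts the round-trip delay into a one-way distance. The waveform, noise level, and sample rate are assumptions made for the example; the disclosure does not prescribe this particular estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

C = 3.0e8                                    # speed of light, m/s
fs = 50e6                                    # baseband sample rate, Hz
n = 4096
tx = rng.standard_normal(n)                  # wideband probing waveform

true_delay = 150                             # samples (3 us round trip)
rx = np.zeros(n)
rx[true_delay:] = 0.3 * tx[:-true_delay]     # attenuated, delayed echo
rx += 0.05 * rng.standard_normal(n)          # receiver noise

corr = np.correlate(rx, tx, mode="full")     # compare echo with probe
delay = corr.argmax() - (n - 1)              # lag of the correlation peak
range_m = C * (delay / fs) / 2.0             # one-way range from r/t delay
print(delay, range_m)                        # 150 samples, ~450 m one-way
```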
In the illustrated embodiments in FIGS. 1A and 1B, the central unit 102 is advantageously physically separated from the one or more transponders 104 to provide physical separation between the computational and sensing functionalities of the sensor network system 100, which provides various advantages described herein. For example, the central unit 102 advantageously serves as a common central processing unit to compute and determine the positions and velocities of the one or more objects 110 detected by the one or more transponders 104. In these embodiments, the one or more transponders 104 are not configured to provide computing functions for the determination of the positions and velocities of the one or more objects 110. By arranging the central unit 102 to serve as a common central processing unit for the one or more transponders 104, cost and complexity of the sensor network 100 may advantageously be reduced.

In some embodiments, the probing signals can be continuous wave (CW) signals (e.g., signals that continuously vary in time domain without any sudden change of amplitude in time). CW probing signals may be used to generate CW free space probing waves and therefore CW echo waves and echo signals. In some other embodiments, the probing signals can be pulsed signals (e.g., signals that include sudden changes of amplitude in time domain). Pulsed probing signals may be used to generate pulsed free space probing waves and therefore pulsed echo waves and echo signals. Advantageously, centralized object detection sensor network systems that use CW probing signals and CW detection techniques may be able to measure the positions and velocities of objects with higher precision compared to pulsed systems. In some embodiments, the centralized object detection sensor network systems described herein may use CW probing signals and CW position and velocity measurement techniques to detect objects in an environment.

In some embodiments, two or more probing signals may be multiplexed to form one or more multiplexed probing signals and two or more echo signals may be multiplexed to form one or more multiplexed echo signals. The echo signals or the probing signals can be multiplexed using various multiplexing techniques such as wavelength (or frequency) division multiplexing, time division multiplexing, polarization division multiplexing, angular momentum division multiplexing, code division multiplexing or other multiplexing methods used for multiplexing signals in the optical domain, electrical domain or RF domain. Advantageously, multiplexing probing and echo signals may reduce the number of links required to connect the one or more transponders 104 to the central unit 102 without reducing the number of probing signals (and the corresponding echo signals) communicated with each transponder. In various embodiments, the probing signals and the echo signals may comprise one or more of analog signals (e.g., electronic analog signals), digital signals (electronic digital signals), Ethernet signals, analog optical signals or digital optical signals. In various embodiments, the probing signals (e.g., electronic signals) may be baseband signals, intermediate frequency (IF) signals, or radio frequency (RF) signals. An IF signal can be an intermediate frequency (IF) carrier whose amplitude or phase is modulated by one or more baseband signals. An RF signal can be a radio frequency (RF) carrier whose amplitude or phase is modulated by one or more baseband signals or one or more IF signals.
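The baseband/IF/RF relationships just described can be sketched numerically. In the following illustration (with assumed, deliberately low frequencies so the signals can be simulated directly), a baseband tone amplitude-modulates an IF carrier, and the resulting IF signal then modulates an RF carrier, placing the energy at f_RF ± f_IF.

```python
import numpy as np

fs = 10e6                                  # sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)
f_bb, f_if, f_rf = 5e3, 100e3, 1e6         # baseband, IF and RF tones

baseband = np.sin(2 * np.pi * f_bb * t)
# Baseband amplitude-modulates the IF carrier...
if_signal = (1 + 0.5 * baseband) * np.cos(2 * np.pi * f_if * t)
# ...and the IF signal is mixed up onto the RF carrier.
rf_signal = if_signal * np.cos(2 * np.pi * f_rf * t)

spectrum = np.abs(np.fft.rfft(rf_signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spectrum.argmax()])  # energy sits near f_rf +/- f_if
```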
In various embodiments, the echo signals may be reflected baseband signals, or reflected baseband signals carried by one or more IF or RF carrier signals. For example, phases or amplitudes of the one or more RF or IF carrier signals may be modulated by the reflected baseband signals. In some examples, phases or amplitudes of the one or more RF carrier signals may be modulated by one or more IF signals that carry the reflected baseband signals. RF carrier signals can be single tone harmonic signals generated by an oscillator (e.g., a local oscillator) and having frequencies between 1-5 GHz, 5-10 GHz, 10-20 GHz, 20-30 GHz, 30-40 GHz, 40-50 GHz, 50-60 GHz, 60-70 GHz, 70-80 GHz, 80-90 GHz, 90-100 GHz, 100 GHz-200 GHz, or 200-300 GHz. IF carrier signals can be single tone harmonic signals generated by an oscillator (e.g., a local oscillator) and having frequencies between 10 MHz and 10 GHz.

In various embodiments, probing signals and echo signals can be optical signals (referred to as optical probing signals and optical echo signals). Optical signals can be digital optical signals or analog optical signals. Analog optical signals may comprise one or more optical carrier signals whose amplitude, phase, polarization or angular momentum are modulated with one or more baseband signals, IF signals or RF signals. Digital optical signals may comprise one or more optical carrier signals whose amplitude, phase, polarization or angular momentum are modulated with one or more digital baseband signals. An optical carrier can be a single tone harmonic light wave (e.g., a laser light) having a frequency (or wavelength) in the visible, near-infrared, or mid-infrared range. In some examples, the frequency of the optical carrier can be between 100 THz and 850 THz (corresponding to a wavelength between 352 nm and 2997 nm). In some examples, the probing signals may be radar signals. In some examples, some of the radar signals may comprise an RF carrier whose amplitude or phase is modulated by one or more baseband signals. In some other examples, at least some of the probing signals may comprise baseband signals. In yet other examples, at least some of the radar signals may comprise an RF carrier whose frequency is shifted by an intermediate frequency (IF) signal whose phase or amplitude is modulated by one or more baseband signals. Each baseband signal may be a digital signal or an analog signal. The frequency of the RF signal may range from a few to a few hundred gigahertz (e.g., between 1-5 GHz, 5-10 GHz, 10-20 GHz, 20-30 GHz, 30-40 GHz, 40-50 GHz, 50-60 GHz, 60-70 GHz, 70-80 GHz, 80-90 GHz, 90-100 GHz, 100 GHz-200 GHz, 200-300 GHz). The frequency of the baseband signal may range from 1 MHz to 10 GHz and the frequency of the IF signal may range from 10 MHz to a few tens of gigahertz. In various embodiments, a probing signal can be a digital electronic signal or a digital optical signal. A digital electronic signal can be a digitized baseband signal, a digitized IF signal or a digitized RF signal. A digital optical signal can be an optical carrier signal modulated by a digital electronic signal. In various embodiments, an electronic-to-optical (E/O) converter may convert an electronic signal to an optical signal by directly modulating an amplitude or a phase of a laser. Alternatively, an E/O converter can convert an electronic signal to an optical signal by externally modulating an amplitude or a phase of a laser using an optical amplitude or phase modulator.
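As a toy illustration of the external-modulation option, the sketch below models an ideal Mach-Zehnder modulator biased at quadrature, which converts a small electronic drive signal into a nearly linear optical intensity variation. The V_pi value and drive amplitude are assumptions; real E/O converters in the system could equally be directly modulated lasers, as noted above.

```python
import numpy as np

V_PI = 3.0                                   # assumed half-wave voltage, V
fs = 1e9
t = np.arange(0, 1e-6, 1 / fs)
drive = 0.2 * np.sin(2 * np.pi * 10e6 * t)   # small-signal electronic drive

def mzm_power(v, p_in=1.0, v_pi=V_PI):
    """Optical output power of a quadrature-biased Mach-Zehnder modulator."""
    return 0.5 * p_in * (1.0 - np.sin(np.pi * v / v_pi))

p_out = mzm_power(drive)
# Near quadrature the transfer is approximately linear in the drive:
linear = 0.5 * (1.0 - np.pi * drive / V_PI)
print(np.max(np.abs(p_out - linear)))        # small for a 0.2 V drive
```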
In some embodiments, the probing signals and echo signals associated with a transponder may be communicated with the central unit 102 via a single bidirectional link (e.g., using one or more routers in the central unit and in the transponder). In some other embodiments, two separate links may be used for sending probing signals from the central unit 102 to a transponder and sending echo signals from the transponder to the central unit 102.

Still referring to FIGS. 1A and 1B, the central unit 102 may comprise several subsystems including but not limited to: a signal processing unit (e.g., a digital signal processing unit), a transmitter unit, a receiver unit, and a router unit. The receiver unit may include one or more receivers and the transmitter unit may include one or more transmitters. The signal processing unit may generate one or more baseband signals and the transmitter unit may convert the one or more baseband signals to one or more probing signals (e.g., by digital-to-analog conversion, up-conversion to higher RF frequencies, and/or amplification). The receiver unit may receive one or more echo signals corresponding to the one or more probing signals and provide one or more reflected baseband signals to the signal processing unit. The signal processing unit may process the one or more reflected baseband signals to determine positions and/or velocities of one or more objects in the environment. Processing the one or more reflected baseband signals may include a comparison between the one or more baseband signals and the one or more reflected baseband signals. In various embodiments, the baseband signals and the reflected baseband signals can be analog or digital electronic signals. In some embodiments, the probing signals, multiplexed probing signals, echo signals and multiplexed echo signals can be optical signals. In such embodiments, the central unit may include one or more electrical-to-optical converters and one or more optical-to-electrical converters. In some examples, the one or more electrical-to-optical converters may include one or more lasers to convert electronic probing signals to optical probing signals. In some cases, the one or more lasers may be directly modulated while in other cases one or more optical modulators may be used to modulate optical outputs of the one or more lasers. In some examples, one or more optical-to-electrical converters may include one or more photodetectors configured to convert optical echo signals to electronic echo signals.

Still referring to FIGS. 1A and 1B, in some embodiments, the central unit 102 may include a multiplexing unit and a demultiplexing unit. The multiplexing unit may include one or more optical and/or electronic multiplexers configured to multiplex two or more probing signals to generate one or more multiplexed probing signals. The demultiplexing unit may include one or more optical or electronic demultiplexers configured to generate two or more echo signals by demultiplexing one or more multiplexed echo signals. The router unit may be configured to couple probing signals from the transmitter unit or the multiplexing unit to the one or more links 106 that communicatively connect the central unit 102 to one or more transponders 104 and couple the echo signals received via the one or more links 106 to the receiver unit. Advantageously, the router unit allows using a single link to send a probing signal or a multiplexed probing signal and simultaneously receive an echo signal or a multiplexed echo signal.
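To make the multiplexing and demultiplexing units concrete, the sketch below frequency-division multiplexes two baseband probing signals onto assumed sub-carriers, as might be done to share a single link, and recovers one of them by mixing and crude low-pass filtering. The carrier frequencies and filter are illustrative assumptions; as noted above, the disclosure contemplates many other multiplexing techniques (time, polarization, code, etc.).

```python
import numpy as np

fs = 1.0e6                                  # sample rate, Hz
t = np.arange(0, 5e-3, 1.0 / fs)
base1 = np.cos(2 * np.pi * 1e3 * t)         # two baseband probing signals
base2 = np.cos(2 * np.pi * 2e3 * t)

f1, f2 = 100e3, 200e3                       # sub-carriers for the shared link
muxed = (base1 * np.cos(2 * np.pi * f1 * t)
         + base2 * np.cos(2 * np.pi * f2 * t))

# Demultiplex channel 1: mix back down, then low-pass (moving average).
mixed = muxed * np.cos(2 * np.pi * f1 * t)
kernel = np.ones(201) / 201
recovered = 2.0 * np.convolve(mixed, kernel, mode="same")

err = np.max(np.abs(recovered[500:-500] - base1[500:-500]))
print(f"max recovery error: {err:.3f}")     # modest; set by the crude filter
```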
In some embodiments, each transmitter unit may include one or more digital-to-analog (D-to-A) converters to convert digital input baseband signals generated by the signal processing module to analog input baseband signals. In some embodiments, each receiver may include one or more analog-to-digital (A-to-D) converters to convert analog output baseband signals to digital output baseband signals. In some embodiments, each transmitter may include components configured to mix RF carrier signals with baseband signals (up-conversion) and each receiver may include components configured to extract baseband signals from modulated RF carrier signals (down-conversion). The signal processing unit may comprise a digital signal processing module configured to receive and process digital signals (e.g., digital output baseband signals) to provide information pertaining to positions and/or velocities of one or more objects in the environment. The digital signal processing unit may comprise a memory configured to store digital data and machine-readable instructions, a processor configured to process the digital data or one or more digital signals and generate an output signal by executing the machine-readable instructions, and an output interface configured to output the one or more output signals. In some embodiments, the signal processing unit may include one or more digital-to-analog (D-to-A) converters to convert digital input baseband signals generated by the digital signal processing module to analog input baseband signals and one or more analog-to-digital (A-to-D) converters to convert analog output baseband signals generated by the one or more receivers to digital output baseband signals.

A transponder of the one or more transponders 104 may comprise several components, subsystems or units including but not limited to an antenna unit. The antenna unit may comprise one or more antennas configured to convert one or more probing signals to one or more free space probing waves and convert one or more free space echo waves, associated with the free space probing waves, to one or more echo signals. In some cases, the antennas of the antenna unit may be configured as multiple input multiple output (MIMO) antennas. Each antenna used in a transponder can be an RF antenna configured to convert RF signals (e.g., RF probing signals) to free space radio waves or an optical antenna configured to convert optical signals (e.g., guided optical waves) to free space light waves. In some embodiments, one or more of the transponders 104 may include one or more phased array antennas each capable of directing the free space waves (radio waves or light waves) to a propagation direction. Each of the one or more phased array antennas may include a plurality of antennas and the propagation direction may be determined by a phase relation among the free space waves emitted by the plurality of antennas. In some cases, the phase relation and therefore the propagation direction may be determined by one or more probing signals received by the transponder. The one or more phased array antennas can be optical phased array antennas configured to control the propagation direction of light waves or RF phased array antennas configured to control the propagation direction of radio waves. In some embodiments, one or more of the transponders 104 may include components for mixing RF carrier signals with baseband signals (up-conversion) and/or extracting baseband signals from modulated RF carrier signals (down-conversion).
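The phase relation that steers a phased array antenna can be written down directly. The sketch below assumes a uniform linear array with half-wavelength spacing at an illustrative 77 GHz carrier: driving element n with phase -2*pi*n*d*sin(theta)/lambda makes the emitted waves add coherently toward angle theta, while off-beam directions largely cancel.

```python
import numpy as np

C = 3.0e8
f = 77e9                                    # assumed RF carrier frequency
lam = C / f
d = lam / 2.0                               # half-wavelength element spacing
n_elems = 8

def steering_phases(theta_deg):
    """Per-element drive phases that steer the beam to theta_deg."""
    n = np.arange(n_elems)
    return -2 * np.pi * n * d * np.sin(np.radians(theta_deg)) / lam

def array_gain(theta_deg, phases):
    """Magnitude of the far-field sum at angle theta for given phases."""
    n = np.arange(n_elems)
    geom = 2 * np.pi * n * d * np.sin(np.radians(theta_deg)) / lam
    return np.abs(np.sum(np.exp(1j * (geom + phases))))

phases = steering_phases(20.0)              # steer the beam to +20 degrees
print(array_gain(20.0, phases))             # ~8: coherent sum of 8 elements
print(array_gain(-40.0, phases))            # much smaller off-beam response
```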
In some embodiments, the transponder may include subsystems for demultiplexing one or more probing signals from a multiplexed probing signal received from the central unit and generating a multiplexed echo signal comprising one or more echo signals. In some cases, the probing signals, multiplexed probing signals, echo signals and multiplexed echo signals can be optical signals. In such embodiments, the transponder may include one or more components for electrical-to-optical and optical-to-electrical conversion. For example, the transponder may include one or more photodetectors to convert optical probing signals to electronic probing signals, and one or more lasers to convert electronic echo signals to optical echo signals. In some cases, the one or more lasers may be directly modulated while in other cases one or more optical modulators may be used to modulate optical outputs of the one or more lasers.

Still referring to FIGS. 1A and 1B, in some embodiments, one or more links 106 may comprise one or more coaxial cables configured for transmitting RF probing signals (e.g., multiplexed RF probing signals) and RF echo signals (e.g., multiplexed RF echo signals), or one or more optical fibers for transmitting optical probing signals (e.g., multiplexed optical probing signals) and optical echo signals (e.g., multiplexed optical echo signals). In some cases, one or more coaxial waveguides may be bundled as a single cable. In some cases, the one or more optical fibers may be bundled as a single cable. In some cases, some of the one or more optical fibers can be single mode optical fibers, multi-mode optical fibers or multicore optical fibers. In some cases, some of the one or more optical fibers can be polarization maintaining optical fibers or other types of specialized optical fibers (e.g., having a special cross-sectional profile, special material composition, special dispersive properties, and the like). In some embodiments, the one or more links 106 may comprise a multiple input-multiple output (MIMO) link between the central unit and the one or more transponders.

In some embodiments, the centralized object detection sensor network system may be a hybrid centralized object detection sensor network system configured to detect one or more objects and determine their location and position using a combination of signals received from one or more transponders and one or more other sensors (e.g., image sensors, kinematic sensors, position sensors, LIDAR sensors, acoustic sensors, and the like). In some embodiments, as opposed to the transponders, the sensors may not use free space probing waves and a comparison between a probing wave and a corresponding echo wave to detect objects. FIG. 2A illustrates a hybrid centralized object detection sensor network system 200 comprising one or more transponders 204 that are communicatively connected to the central unit 200 via one or more communication links 106 and one or more sensors 205 that are communicatively connected to the central unit 200 via one or more communication links 207. In some embodiments, the communication links 106 may be a different type of communication link compared to the communication links 207. The one or more sensors 205 may generate one or more sensor signals usable for determining position, velocity or other characteristics of the one or more objects (e.g., shape, color, temperature, and the like).
As illustrated in FIG. 2B, in some embodiments, a centralized object detection sensor network system 201 may include one or more end nodes 205 configured to demultiplex one or more multiplexed probing signals received from the central unit 203 via one or more node links (primary communication links) 209 and to multiplex all echo signals received from one or more transponders (e.g., a first group of transponders 104a) via one or more secondary links 106a, so that they can be transmitted to the central unit 203 using the one or more node links 209. Each of the one or more end nodes 205 may be configured to distribute probing signals among transponders in a group of transponders and receive echo signals from the group of transponders. The number of transponders included in a group of transponders may be the same as or different from the number of transponders in another group of transponders. For example, as shown in FIG. 2B, the end node 205a may be connected to a group of transponders 104a comprising N1 transponders via one or more secondary links 106a, and the end node 205b may be connected to another group of transponders 104b comprising N2 transponders via one or more secondary links 106b, where N1 is not equal to N2. Each node link of the one or more node links 209 may include one or more of an Ethernet, RF or optical communication link. The use of the end nodes 205 in the centralized object detection sensor network system 201 reduces the cost, complexity, and maintenance of the system, in particular when the distance between the transponders and the central unit 203 is long (e.g., on the order of one kilometer or more).

The disclosed centralized object detection sensor network systems and the corresponding system architectures described above may provide several advantages, including, but not limited to, the following.

Distributed sensors and transponders. Relatively more complex, vulnerable and expensive parts of the system (e.g., transmitter, receiver, and signal processor) may be physically decoupled from the transponders and sensors and housed at a remote location (e.g., a base station). Such a configuration allows distributing a large number of sensors and transponders over a large area and adjusting their positions for optimal detection and sensing with greater efficiency and lower cost. The transponders may be in communication with the central unit by wire, waveguide, optical fibers or other types of connections. In the case of optical fibers, the distance between the transponders and the central unit may span from a few centimeters to kilometers.

Advanced raw signal processing. Advanced and complex signal processing methods (e.g., advanced digital signal processing, machine learning techniques and the like) may not be implementable in sensors and transponders that may be installed in large numbers over a wide area and are therefore limited by various constraints (e.g., size, cost, weight, environmental condition, and the like). A sensor network system may still benefit from the above-mentioned signal processing methods if they are performed by a central unit, shared among the sensors and transponders, in which complexity, cost, and environmental constraints may be managed more efficiently. Such a centralized sensor network may be implemented by capturing the original information contained in the echo signals (e.g., raw signals) at the sensors or transponders and transmitting them unaltered to the central unit.
Advantageously, a centralized sensor network system may also exploit the synchronous processing of raw echo signals received from a plurality of sensors or transponders to extract certain information about the detected objects, which may not be extracted from individually processed echo signals.

Reduced System Complexity. A centralized object detection sensor network system with shared transmitter, receiver and digital signal processing may reduce the total number of components and the overall hardware and software system complexity.

Radar-Lidar Multiplexing. Transponders may be configured to detect objects using both free space light waves and free space radio waves. Correspondingly, the central unit and the communication links (e.g., optical links) may be shared between lidar and radar systems, thereby further reducing system complexity in applications requiring both sensing modalities (e.g., autonomous vehicles).

Concurrent Sensing. Common signal processing between lidar, radar or other sensing modalities (e.g., cameras, kinematic sensors, position sensors, acoustic sensors) may enhance detection and ranging performance. Exchanging early heuristics through a common signal processing platform may provide mutual reinforcement and validation among the different sensing modalities.

FIG. 3A illustrates an example centralized object detection sensor network system 300a having a central unit 301 coupled to a plurality of transponders 104 through a plurality of communication links 106, where the central unit 301 transmits one probing signal to each of the plurality of transponders 104, and receives one echo signal from each transponder via a communication link. In this example, the central unit 301 includes a signal processing unit 320 that generates a plurality of baseband signals 333 and transmits them to a transmitter unit 321. In some examples, the number of baseband signals in the plurality of baseband signals may be equal to the number of transponders in the centralized object detection sensor network system 300a. The plurality of baseband signals can be analog or digital signals. The transmitter unit 321 may use the plurality of baseband signals 333 to generate a plurality of probing signals 337. The number of probing signals may be equal to the number of baseband signals. In some examples, the transmitter unit 321 may be configured to upconvert the plurality of baseband signals 333 using one or more RF carrier signals or one or more IF carrier signals and amplify the resulting upconverted baseband signals to generate the plurality of probing signals 337. In some cases, each of the baseband signals 333 may be upconverted using a different RF carrier signal or different IF carrier signal (e.g., having a different frequency). In some other examples, the transmitter unit 321 may generate the plurality of probing signals 337 only by amplifying the plurality of baseband signals 333 and/or adjusting relative phase differences associated with the plurality of baseband signals 333. In some embodiments, the plurality of baseband signals 333 can be a plurality of digital signals, and the transmitter unit 321 may be configured to convert them to a plurality of analog signals (e.g., using a digital-to-analog converter) before up-converting and/or amplifying them. The plurality of probing signals 337 may be received by a router unit 331 that couples the plurality of the probing signals 337 to the plurality of links 106. Each of the plurality of links 106 may deliver a probing signal to a transponder of the plurality of the transponders 104.
The transponder may convert the probing signal to one or more free space probing waves directed to an environment. In some examples, the transponder may upconvert the probing signal to an RF probing signal using an RF carrier signal (e.g., generated by a local oscillator in the transponder) and use the resulting RF probing signal to generate the one or more free space probing waves. One or more objects in the environment may generate one or more echo waves by reflecting the one or more free space probing waves. The transponder may receive a portion of the one or more echo waves, generate an echo signal, and transmit the echo signal to the central unit 301 via the same link from which the corresponding probing signal was received. In some examples, the transponder may downconvert an RF echo signal received from an antenna unit, using the RF carrier signal, to generate the echo signal. A plurality of echo signals 339 may be transmitted from the plurality of transponders 104 to the central unit 301 via the plurality of links 106, where each link transmits the echo signal generated by one of the transponders 104. The router 331 may couple the plurality of echo signals 339 received from the plurality of transponders 104 to the receiver unit 323. The receiver unit 323 may generate a plurality of reflected baseband signals 335 and send them to the signal processing unit 320. The signal processing unit 320 (e.g., a digital signal processing unit) may determine velocities and/or locations of the one or more objects based at least in part on the plurality of reflected baseband signals 335. In some examples, the signal processing unit 320 may generate an object signal based at least in part on the plurality of reflected baseband signals 335. The object signal may be usable for determining the velocities and the positions of the one or more objects. In some cases, the object signal may comprise one or more digital signals, and it can be an optical or an electronic signal. In some examples, the signal processing unit 320 may use the plurality of reflected baseband signals 335 and the plurality of baseband signals 333 to determine velocities and/or locations of the one or more objects or to generate the object signal. In some examples, the receiver unit 323 may be configured to downconvert the plurality of reflected baseband signals 335 from the plurality of echo signals 339 using the one or more RF carrier signals or one or more IF carrier signals. The one or more RF or IF carrier signals may be generated by one or more local oscillators in the central unit 301. In some other examples, the receiver unit 323 may generate the plurality of reflected baseband signals 335 only by amplifying the plurality of echo signals 339. In some examples, the reflected baseband signals 335 can be digital signals digitized by the receiver unit 323.
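The disclosure leaves the exact ranging computation to the signal processing unit. One simple possibility, sketched below for a pulsed time-of-flight scheme, is to cross-correlate a reflected baseband signal with the transmitted baseband signal and convert the delay of the correlation peak into a range; all waveform parameters here are invented for illustration and are not prescribed by the disclosure.

    import numpy as np

    c = 3.0e8                                    # speed of light (m/s)
    fs = 10.0e6                                  # sample rate (Hz), assumed
    t = np.arange(0, 200e-6, 1 / fs)
    probe = np.sinc(4e5 * (t - 20e-6))           # transmitted baseband pulse

    true_delay = 6.0e-6                          # round-trip delay of the echo
    echo = 0.2 * np.roll(probe, int(true_delay * fs))   # attenuated, delayed copy

    # Matched-filter style estimate: the correlation peak sits at the delay.
    xc = np.correlate(echo, probe, mode="full")
    lag = np.argmax(xc) - (len(probe) - 1)
    tau = lag / fs
    print(f"estimated range: {c * tau / 2:.1f} m")      # ~900 m for this example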
In some embodiments, the plurality of links 106 may be optical links, and the plurality of probing signals 337 and the plurality of echo signals 339 may be optical signals (herein referred to as the plurality of optical probing signals and the plurality of optical echo signals). In these embodiments, the transmitter unit 321 may convert the plurality of baseband signals 333 (electronic signals) to a plurality of optical baseband signals using one or more electric-to-optical (E/O) converters. In some cases, the transmitter unit 321 may first upconvert the plurality of baseband signals using the one or more RF or IF carrier signals and convert the resulting RF or IF signals to the plurality of optical probing signals 337. The E/O converters may generate the plurality of optical signals by modulating the amplitude or phase of one or more optical carrier signals using the plurality of baseband signals, the IF signals, or the RF signals. Further, the receiver unit 323 may include one or more optical-to-electrical (O/E) converters to generate the plurality of reflected baseband signals 335 using the plurality of optical echo signals 339. The O/E converters may generate the plurality of reflected baseband signals 335 using one or more photodetectors.

FIG. 3B illustrates a centralized object detection sensor network system 300b having a central unit 302 coupled to a plurality of transponders 104 through a plurality of communication links 106, where the central unit 302 is configured to transmit a plurality of multiplexed probing signals 342a to the plurality of transponders 104 and to receive a plurality of multiplexed echo signals 344 from the plurality of transponders 104. In some embodiments, a multiplexed probing signal and a multiplexed echo signal associated with a transponder of the plurality of transponders 104 may be communicated via a single link. In certain embodiments, the centralized object detection sensor network system 300b can include one or more features, components, or functionalities previously described with respect to the centralized object detection sensor network system 300a (FIG. 3A), the details of which may be omitted herein for brevity. In the example shown, the central unit 302 includes a signal processing unit 319 that generates a plurality of baseband signals 334a comprising a plurality of baseband signal groups. In some examples, the number of baseband signal groups may be equal to the number of transponders in the centralized object detection sensor network system 300b. Each of the plurality of baseband signal groups may comprise one or more baseband signals. The signal processing unit 319 transmits the plurality of baseband signals 334a to a transmitter unit 322. The transmitter unit 322 may use the plurality of baseband signals 334a to generate a plurality of probing signals 338a comprising a plurality of probing signal groups associated with the plurality of baseband signal groups. Each of the probing signal groups 338a may be associated with a transponder of the plurality of transponders 104. The number of probing signal groups may be equal to the number of baseband signal groups. In some examples, the transmitter unit 322 may be configured to upconvert the plurality of baseband signals 334a using one or more RF carrier signals or one or more IF carrier signals and amplify the resulting upconverted baseband signals to generate the plurality of probing signals 338a. In some cases, each of the baseband signals 334a may be upconverted using a different RF carrier signal or a different IF carrier signal (e.g., having a different frequency). In some other examples, the transmitter unit 322 may generate the plurality of probing signals 338a only by amplifying and/or adjusting relative phase differences among the plurality of baseband signals 334a. In some embodiments, the plurality of baseband signals 334a can be a plurality of digital signals, and the transmitter unit 322 may be configured to convert them to a plurality of analog signals (e.g., using one or more digital-to-analog converters) before upconverting and/or amplifying them. The transmitter unit 322 sends the plurality of probing signals 338a, comprising the plurality of probing signal groups, to an intra-transponder multiplexer unit 326.
The intra-transponder multiplexer 326 generates a plurality of multiplexed probing signals 342a, where each multiplexed probing signal is associated with one of the plurality of probing signal groups. As such, the number of multiplexed probing signals may be equal to the number of transponders in the plurality of transponders 104. The intra-transponder multiplexer 326 may generate each multiplexed probing signal using the one or more probing signals in the corresponding probing signal group and based on a signal multiplexing method (e.g., an electronic or an optical signal multiplexing method). A router 332 (e.g., an optical router or an electronic router) may receive the plurality of multiplexed probing signals 342a and couple them to the plurality of links 106, where each multiplexed probing signal is transmitted to a transponder via a link. The transponder may include a demultiplexer unit that receives the multiplexed probing signal, generates the one or more probing signals associated with a probing signal group, and uses them to generate one or more free space probing waves directed to an environment. In some examples, the transponder may upconvert the one or more probing signals to one or more RF probing signals using one or more RF carrier signals (e.g., generated by one or more local oscillators) and use the resulting RF probing signals to generate the one or more free space probing waves. One or more objects in the environment may generate one or more echo waves by reflecting the one or more probing waves. The transponder may receive a portion of the one or more echo waves and generate one or more echo signals. In some cases, the one or more echo signals may be generated by downconverting one or more RF echo signals received from an antenna unit (e.g., using one or more IF carrier signals or one or more RF carrier signals), and the one or more echo signals may comprise one or more IF echo signals or one or more reflected baseband signals. The transponder may include a multiplexer unit configured to generate a multiplexed echo signal using the one or more echo signals. In some examples, the multiplexed echo signal may comprise an echo signal group. The transponder may transmit the multiplexed echo signal to the central unit 302 via the same link from which the corresponding multiplexed probing signal was received. A plurality of multiplexed echo signals may be transmitted from the plurality of transponders 104 to the central unit 302 via the plurality of links 106, where each link transmits the multiplexed echo signal generated by one of the transponders.
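The intra-transponder multiplexing step can be pictured with a small frequency-division example: each probing signal in a group rides its own subcarrier, and the sum travels over a single link, from which each channel can later be recovered by mixing and low-pass filtering. The subcarrier frequencies, pulse shapes, and the FFT-mask filter below are illustrative assumptions only; the disclosure equally allows other electronic or optical multiplexing methods.

    import numpy as np

    fs = 1.0e6                                   # sample rate (Hz), assumed
    t = np.arange(0, 1e-3, 1 / fs)

    # A probing signal group: two baseband signals destined for one transponder.
    group = [np.sinc(2e3 * (t - 0.3e-3)), np.sinc(2e3 * (t - 0.6e-3))]
    subcarriers = [100e3, 200e3]                 # assumed, well-separated subcarriers

    # Frequency-division multiplexing: the channel sum occupies one link.
    multiplexed = sum(s * np.cos(2 * np.pi * f * t)
                      for s, f in zip(group, subcarriers))

    def recover(muxed, f_sub, cutoff=50e3):
        """Demultiplex one channel: mix down, then low-pass by FFT masking."""
        mixed = 2 * muxed * np.cos(2 * np.pi * f_sub * t)
        spectrum = np.fft.rfft(mixed)
        freqs = np.fft.rfftfreq(len(t), 1 / fs)
        spectrum[freqs > cutoff] = 0.0
        return np.fft.irfft(spectrum, n=len(t))

    channel_0 = recover(multiplexed, subcarriers[0])   # ~group[0]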
The router 332 may couple the plurality of multiplexed echo signals 344 received from the plurality of transponders 104 to an intra-transponder demultiplexer unit 330 configured to generate a plurality of echo signals 340 comprising a plurality of echo signal groups associated with the plurality of probing signal groups. The plurality of echo signals 340 may be received by a receiver unit 324. The receiver unit 324 may generate a plurality of reflected baseband signals 336 and send them to the signal processing unit 319. The signal processing unit 319 may determine velocities and/or locations of the one or more objects based at least in part on the plurality of reflected baseband signals 336. In some examples, the signal processing unit 319 may generate an object signal based at least in part on the plurality of reflected baseband signals 336. The object signal may be usable for determining the velocities and the positions of the one or more objects. In some cases, the object signal may comprise one or more digital signals, and it can be an optical or an electronic signal. In some examples, the signal processing unit 319 may use the plurality of reflected baseband signals 336 and the plurality of baseband signals 334a to determine velocities and/or locations of the one or more objects or to generate the object signal. In some examples, the receiver unit 324 may be configured to downconvert the plurality of reflected baseband signals 336 from the plurality of echo signals 340 using the one or more carrier signals (e.g., RF carrier signals or IF carrier signals). In some other examples, the receiver unit 324 may generate the plurality of reflected baseband signals 336 only by amplifying the plurality of echo signals 340 and/or adjusting relative phase differences among the plurality of reflected baseband signals. In some embodiments, the plurality of links 106 (in the system 300b) may be optical links, and the plurality of probing signals 338a, the plurality of multiplexed probing signals 342a, the plurality of echo signals 340, and the plurality of multiplexed echo signals 344 may be optical signals (herein referred to as the plurality of optical probing signals, optical multiplexed probing signals, optical echo signals, and optical multiplexed echo signals). In these embodiments, the transmitter unit 322 may convert the plurality of baseband signals 334a to a plurality of optical probing signals using one or more electric-to-optical (E/O) converters. In some cases, the transmitter unit 322 may first upconvert the plurality of baseband signals using the one or more RF or IF carrier signals and convert the resulting RF or IF signals to optical probing signals. The E/O converters may generate the plurality of optical probing signals by modulating the amplitude or phase of one or more optical carrier signals using the baseband, IF, or RF signals. The receiver unit 324 may include one or more optical-to-electrical (O/E) converters to generate the plurality of reflected baseband signals 336. The O/E converters may generate the plurality of reflected baseband signals 336 using one or more photodetectors. In these embodiments, the intra-transponder multiplexer 326, the intra-transponder demultiplexer 330, the multiplexer unit in each transponder, and the demultiplexer unit in each transponder may comprise optical multiplexers and optical demultiplexers configured to multiplex and demultiplex optical signals using optical multiplexing/demultiplexing methods (e.g., wavelength division multiplexing, time division multiplexing, and the like). The optical intra-transponder multiplexer 326 may generate a plurality of optical multiplexed probing signals 342a using the plurality of optical probing signals 338a. The optical intra-transponder demultiplexer 330 may generate a plurality of optical echo signals 340 using the plurality of optical multiplexed echo signals 344. Further, in these embodiments, each transponder may include one or more O/E converters to convert the plurality of optical probing signals (generated by one or more optical demultiplexers) to electronic signals, and one or more E/O converters to generate the plurality of optical echo signals.
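The E/O and O/E conversions described above can be pictured with a toy amplitude-modulation model. Real optical carriers sit near 193 THz, far beyond any practical sample rate, so the carrier frequency below is a schematic stand-in chosen only to make the envelope visible; the square-law detection step mirrors how a photodetector responds to optical power. Every parameter here is an assumption for illustration.

    import numpy as np

    fs = 50.0e6                                  # sample rate (Hz), assumed
    t = np.arange(0, 0.2e-3, 1 / fs)

    # Baseband signal kept positive so it can serve as an AM envelope.
    baseband = 0.5 * (1.0 + np.sinc(2e4 * (t - 0.1e-3)))

    f_opt = 5.0e6        # schematic stand-in for an optical carrier frequency
    optical = baseband * np.cos(2 * np.pi * f_opt * t)   # E/O: amplitude modulation

    # O/E: a photodetector is a square-law device responding to optical power;
    # low-pass filtering optical**2 would leave baseband**2 / 2, from which the
    # envelope follows by a square root.
    detected_power = optical ** 2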
FIG. 3C illustrates a centralized object detection sensor network system 300c having a central unit 303 coupled to a plurality of transponders 104 through a plurality of communication links 106, where the central unit 303 distributes a plurality of multiplexed probing signals 342b among the plurality of transponders 104 and receives a plurality of multiplexed echo signals 344 from the plurality of transponders 104. In certain embodiments, the centralized object detection sensor network system 300c can include one or more units, features, or functionalities previously described with respect to the centralized object detection sensor network system 300a (FIG. 3A) or 300b (FIG. 3B), the details of which may be omitted herein for brevity. In the example shown (FIG. 3C), the central unit 303 includes a signal processing unit 318 that generates a plurality of baseband signals 334b and transmits them to a transmitter unit 322b. The transmitter unit 322b may use the plurality of baseband signals 334b to generate a plurality of probing signals 338b. In some examples, the transmitter unit 322b may be configured to upconvert the plurality of baseband signals 334b using one or more RF or IF carrier signals and amplify the resulting one or more RF or IF signals (upconverted baseband signals) to generate the plurality of probing signals 338b. In some cases, each baseband signal may be upconverted using a different RF or IF carrier signal (e.g., having a different frequency). In some other examples, the transmitter unit 322b may generate the plurality of probing signals 338b only by amplifying and/or adjusting relative phase differences of the plurality of baseband signals 334b. In some embodiments, the plurality of baseband signals 334b can be a plurality of digital signals, and the transmitter unit 322b may be configured to convert them to a plurality of analog signals before upconverting and/or amplifying them. The transmitter unit 322b sends the plurality of probing signals 338b to an intra-transponder multiplexer unit 326a that generates a multiplexed probing signal based on a signal multiplexing method (e.g., an electronic or an optical signal multiplexing method). Next, an inter-transponder distributor 326b (e.g., an electronic or optical inter-transponder distributor) generates a plurality of probing signals 342b comprising one or more copies of the multiplexed probing signal generated by the intra-transponder multiplexer unit 326a. In some examples, the inter-transponder distributor 326b may be an electronic or optical coupler with an input port and a plurality of output ports, where portions (e.g., equal portions) of a signal received by the input port are output from each output port. In some such examples, the number of output ports may be equal to the number of transponders in the centralized object detection sensor network system 300c.
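A minimal model of the inter-transponder distributor 326b, assuming an ideal lossless 1-to-N coupler that splits the input power equally among its output ports, is sketched below; the function name and the field-amplitude scaling convention are illustrative assumptions rather than details fixed by the disclosure.

    import numpy as np

    def distribute(multiplexed_signal: np.ndarray, n_outputs: int) -> list:
        """Ideal 1xN distributor: every output port carries a copy of the
        input signal at 1/n_outputs of the input power (amplitude scaled
        by 1/sqrt(n_outputs) for a field-like quantity)."""
        scale = 1.0 / np.sqrt(n_outputs)
        return [scale * multiplexed_signal for _ in range(n_outputs)]

    # Example: one multiplexed probing signal copied toward four transponders.
    copies = distribute(np.ones(8), n_outputs=4)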
Still referring to FIG. 3C, a router 333 (e.g., an optical router or an electronic router) may receive the plurality of multiplexed probing signals 342b and couple them to the plurality of links 106, where each multiplexed probing signal is transmitted to a transponder (e.g., transponder 1, transponder 2, . . . , or transponder N) via a link. The transponder may include a demultiplexer unit that receives the multiplexed probing signal, generates the plurality of probing signals 338b, and uses the plurality of probing signals 338b to generate one or more free space probing waves directed to an environment. In some examples, where the plurality of probing signals 338b comprises the plurality of baseband signals 334b or IF signals, the transponder may upconvert the one or more probing signals using an RF carrier signal and use the resulting RF probing signal to generate the one or more free space probing waves. One or more objects in the environment may generate one or more echo waves by reflecting the one or more probing waves. The transponder may receive a portion of the one or more echo waves and generate one or more echo signals. The transponder may include a multiplexer unit configured to generate a multiplexed echo signal using the one or more echo signals. The transponder may transmit the multiplexed echo signal to the central unit 303 via the same link from which the corresponding multiplexed probing signal was received. In some examples, the transponder may downconvert one or more RF echo signals received from an antenna unit, using one or more RF carriers, to generate the one or more echo signals. In these examples, each echo signal may comprise a reflected baseband signal or an IF carrier modulated by the reflected baseband signal. A plurality of multiplexed echo signals may be transmitted from the plurality of transponders 104 to the central unit 303 via the plurality of links 106, where each link transmits the multiplexed echo signal generated by one of the transponders. The router 333 may couple the plurality of multiplexed echo signals 344b received from the plurality of transponders 104 to an intra-transponder demultiplexer unit 330b configured to generate a plurality of echo signals 340b comprising a plurality of echo signal groups associated with the plurality of transponders 104. The plurality of echo signals 340b may be received by a receiver unit 325. The receiver unit 325 may generate a plurality of reflected baseband signals 336b and send them to the signal processing unit 318. The signal processing unit 318 may determine velocities and/or locations of the one or more objects based at least in part on the plurality of reflected baseband signals 336b. In some examples, the signal processing unit 318 may generate an object signal based at least in part on the plurality of reflected baseband signals 336b. The object signal may be usable for determining the velocities and the positions of the one or more objects. In some cases, the object signal may comprise one or more digital signals, and it can be an optical or an electronic signal. In some examples, the signal processing unit 318 may use the plurality of reflected baseband signals 336b and the plurality of baseband signals 334b to determine velocities and/or locations of the one or more objects or to generate the object signal. In some examples, the receiver unit 325 may be configured to downconvert the plurality of reflected baseband signals 336b from the plurality of echo signals 340b using the one or more RF carrier signals and/or using the one or more IF signals. In some other examples, where the plurality of echo signals 344b comprise a plurality of baseband signals, the receiver unit 325 may generate the plurality of reflected baseband signals 336b only by amplifying the plurality of echo signals 340b. In some examples, the reflected baseband signals 336b can be digital signals digitized by the receiver unit 325.
In some embodiments, the plurality of links 106 may be optical links, and the plurality of probing signals 338b, the plurality of multiplexed probing signals 342b, the plurality of echo signals 340b, and the plurality of multiplexed echo signals 344b may be optical signals (herein referred to as the plurality of optical probing signals, optical multiplexed probing signals, optical echo signals, and optical multiplexed echo signals). In these embodiments, the transmitter unit 322b may convert the plurality of baseband signals 334b to a plurality of optical probing signals using one or more electric-to-optical (E/O) converters. In some cases, the transmitter unit 322b may first upconvert the plurality of baseband signals using the one or more RF or IF carrier signals and convert the resulting RF or IF signals to optical probing signals. The E/O converters may generate the plurality of optical probing signals by modulating the amplitude or phase of one or more optical carrier signals using the baseband, IF, or RF signals. The receiver unit 325 may include one or more optical-to-electrical (O/E) converters to generate the plurality of reflected baseband signals 336b. The O/E converters may generate the plurality of reflected baseband signals 336b using one or more photodetectors. In these embodiments, the intra-transponder multiplexer 326a, the intra-transponder demultiplexer 330b, the multiplexer unit in each transponder, and the demultiplexer unit in each transponder may comprise optical multiplexers and optical demultiplexers configured to multiplex and demultiplex optical signals using optical multiplexing/demultiplexing methods (e.g., wavelength division multiplexing, time division multiplexing, and the like). The optical intra-transponder multiplexer 326a, together with the inter-transponder distributor 326b, may generate a plurality of optical multiplexed probing signals 342b using the plurality of optical probing signals 338b. The optical intra-transponder demultiplexer 330b may generate a plurality of optical echo signals 340b using the plurality of optical multiplexed echo signals 344b. Further, in these embodiments, each transponder may include one or more O/E converters to convert the plurality of optical probing signals (generated by one or more optical demultiplexers) to electronic signals, and one or more E/O converters to generate the plurality of optical echo signals.

FIG. 4 illustrates a centralized object detection sensor network system 400 having a central unit 403 coupled to a plurality of end nodes through a plurality of node links 209, where each end node transmits a plurality of probing signals to each transponder and receives a plurality of echo signals from each transponder. In certain embodiments, the centralized object detection sensor network system 400 can include one or more features of some of the embodiments previously described with respect to the centralized object detection sensor network systems 300a (FIG. 3A), 300b (FIG. 3B), and/or 300c (FIG. 3C), the details of which may be omitted herein for brevity. In the illustrated example, the central unit 403 includes a signal processing unit 319 that generates a plurality of baseband signals 334 and transmits them to a transmitter unit 322. The transmitter unit 322 may use the plurality of baseband signals 334 to generate a plurality of probing signals 338. In some examples, the transmitter unit 322 may be configured to upconvert the plurality of baseband signals 334 using one or more RF carrier signals and amplify the resulting upconverted baseband signals to generate the plurality of probing signals 338.
In some cases, each baseband signal may be upconverted using a different RF carrier signal (e.g., having a different frequency). In some other examples, the transmitter unit 322 may generate the plurality of probing signals 338 only by amplifying and/or adjusting relative phase differences of the plurality of baseband signals 334. In yet other examples, the transmitter unit 322 may be configured to upconvert the plurality of baseband signals 334 using one or more IF carrier signals and amplify the resulting upconverted baseband signals to generate the plurality of probing signals 338. In some examples, each baseband signal may be upconverted using a different IF signal (e.g., having a different frequency). In some embodiments, the plurality of baseband signals 334 can be a plurality of digital signals, and the transmitter may be configured to convert them to a plurality of analog signals before upconverting and/or amplifying them. The transmitter unit 322 sends the plurality of probing signals 338 to an intra-transponder multiplexer unit 326 that generates a plurality of multiplexed probing signals 342. Each multiplexed probing signal may comprise a group of probing signals associated with one of the transponders of the plurality of transponders. The number of multiplexed probing signals in the plurality of multiplexed probing signals may be equal to the number of transponders in the centralized object detection sensor network system 400. Next, an inter-transponder multiplexer 446a generates a plurality of node signals 454, where each node signal is a multiplexed signal comprising one or more multiplexed probing signals associated with the transponders fed by the corresponding node. The plurality of multiplexed probing signals 342 and the plurality of node signals 454 may be generated using optical or electronic multiplexing methods. A router 450 (e.g., an optical router or an electronic router) may receive the plurality of node signals 454 and couple them to the plurality of node links 209, where each node signal is transmitted to an end node of the plurality of end nodes via a link. Each end node may be configured to: distribute the plurality of probing signals associated with a received node signal among a plurality of transponders, receive a plurality of echo signals from the plurality of transponders, and generate an echo node signal that is transmitted back to the router 450 (in the central unit 403), for example, via the link (used to deliver the node signal to the end node). In some embodiments, each end node may include one or more routers, an inter-transponder demultiplexer, and an inter-transponder multiplexer. As an example, a first end node 405a may receive a first node signal from a first node link 209a and distribute it among a first plurality of transponders 404 (for example, N transponders). The first node signal may be received by the router 452 that couples the end node signal 454a to the inter-transponder demultiplexer 448b. The inter-transponder demultiplexer 448b generates a plurality of multiplexed probing signals 342a and sends them to the router 332 that transmits them to the first plurality of transponders 404 via a first plurality of links 406. The first plurality of transponders 404 may generate a first plurality of multiplexed echo signals and transmit them to the first end node 405a via the first plurality of links 406 (e.g., based on the process described with respect to FIG. 3B or FIG. 3C).
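The two multiplexing levels of FIG. 4 just described (intra-transponder grouping of probing signals, then inter-transponder grouping into node signals) amount to nested bundling, which the following bookkeeping sketch makes explicit; every identifier in it is hypothetical and stands in for signals and units of the disclosure only schematically.

    # Intra-transponder level: a group of probing signals per transponder.
    probing_groups = {
        "transponder_1": ["p11", "p12"],
        "transponder_2": ["p21"],
        "transponder_3": ["p31", "p32", "p33"],
    }

    # Inter-transponder level: each end node serves a subset of transponders.
    node_plan = {
        "end_node_A": ["transponder_1", "transponder_2"],
        "end_node_B": ["transponder_3"],
    }

    # A node signal bundles the multiplexed probing signals of all
    # transponders fed by that end node.
    node_signals = {
        node: {tp: probing_groups[tp] for tp in transponders}
        for node, transponders in node_plan.items()
    }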
The router 332 may couple the plurality of multiplexed echo signals 344 received from the plurality of transponders 404 to the inter-transponder multiplexer 446b that generates an echo node signal 455a (by multiplexing the multiplexed echo signals received from the plurality of transponders 404). The router 452 couples the echo node signal 455a to the first node link 209a. Still referring to FIG. 4, a plurality of echo node signals 456 may be received by the router 450 that couples them to the inter-transponder demultiplexer 448a. The inter-transponder demultiplexer 448a generates a plurality of multiplexed echo signals 344 and sends them to the intra-transponder demultiplexer 330. The intra-transponder demultiplexer 330 generates a plurality of echo signals 340. The plurality of echo signals 340 may be received by a receiver unit 324 that generates a plurality of reflected baseband signals 336 and sends them to the signal processing unit 319. The signal processing unit 319 may use the plurality of reflected baseband signals 336 to determine velocities and/or locations of the one or more objects. In some embodiments, the signal processing unit 319 may generate one or more object signals usable for determining velocities and locations of the one or more objects. In some examples, the receiver unit 324 may be configured to downconvert the plurality of reflected baseband signals 336 from the plurality of echo signals 340 using the one or more RF carrier signals and/or using one or more IF signals. In some other examples, where the plurality of echo signals 344 comprise a plurality of baseband signals, the receiver unit 324 may generate the plurality of reflected baseband signals 336 only by amplifying the plurality of echo signals 340. In some examples, the reflected baseband signals 336 can be digital signals digitized by the receiver unit 324. In some embodiments, the plurality of node links 209 and links 406 may be optical links, and the plurality of probing signals 338, the plurality of multiplexed probing signals 342, the plurality of node signals 454, the plurality of echo node signals 456, the plurality of echo signals 340, and the plurality of multiplexed echo signals 344 may be optical signals. In these embodiments, the transmitter unit 322 may convert the plurality of baseband signals 334 to a plurality of optical baseband signals or to a plurality of optical upconverted baseband signals (e.g., by modulating the amplitude or phase of one or more optical carrier signals using the baseband signals or the upconverted baseband signals). Upconverted baseband signals can be baseband signals upconverted using an RF carrier or an IF signal. Further, the receiver unit 324 may convert the plurality of optical echo signals 340 to the plurality of reflected baseband signals 336. In these embodiments, the intra-transponder multiplexer 326, the inter-transponder multiplexer 446a, the inter-transponder demultiplexer 448a, the inter-transponder demultiplexer 448b, and the inter-transponder multiplexer 446b can be optical multiplexer and optical demultiplexer units that function based on optical multiplexing and demultiplexing methods (e.g., wavelength division multiplexing, time division multiplexing, and the like).

FIG. 5 illustrates an example transponder 504 of a centralized object detection sensor network system configured to receive/transmit probing/echo signals from/to a central unit of the centralized object detection sensor system.
The transponder 504 may be, e.g., a radar transponder configured to receive one or more probing signals or a multiplexed probing signal, convert them to free space probing waves, receive free space echo waves, and convert them to a multiplexed echo signal or one or more echo signals. In the illustrated example, the transponder 504 may comprise a router 550 configured to couple a multiplexed probing signal 570 from a link 506 to an intra-transponder demultiplexer unit 552 that generates a plurality of probing signals 572. The router 550 can be an electronic router configured for coupling electronic signals from a link to a device or an optical router configured for coupling optical signals from a link to a device. In some examples, the multiplexed probing signal 570 and the plurality of probing signals 572 may be optical signals, and the intra-transponder demultiplexer unit 552 may be an optical demultiplexer. In these examples, an optical-to-electrical (O/E) converter 554 may convert the plurality of optical probing signals to a plurality of electronic probing signals 574. In some other examples, the plurality of probing signals 572 may be electronic probing signals, in which case they may be directly transmitted to an upconverter unit 556 or to an antenna unit 558 as RF probing signals 576. In such examples, the transponder 504 may not include the O/E converter 554. The electronic probing signals 574, or electronic probing signals 572, can be baseband signals (e.g., generated by a central unit) or upconverted baseband signals (upconverted to intermediate frequencies (IFs) or radio frequencies (RFs)). In some cases, where the electronic probing signals 574, or electronic probing signals 572, are baseband signals or baseband signals upconverted to IFs, the upconverter unit 556 may upconvert them using one or more RF carriers to generate one or more RF probing signals 576. In some examples, the optical probing signals 572 may comprise one or more optical carriers whose amplitude or phase is modulated by baseband signals or upconverted baseband signals (upconverted to intermediate frequencies (IFs) or radio frequencies (RFs)). The antenna unit 558 may comprise one or more antennas configured to convert one or more RF probing signals to free space probing waves, receive one or more free space echo waves, and convert them to RF echo signals 578. In some cases, a downconverter unit 560 may then downconvert the one or more RF echo signals 578 to IF signals (one or more IF carriers modulated by baseband signals) or baseband signals to generate one or more echo signals 580. In some cases, the one or more echo signals 580 may be directly transmitted to an intra-transponder multiplexer unit 564 (e.g., an electronic multiplexer unit). The intra-transponder multiplexer unit 564 may use the one or more echo signals 580 to generate a multiplexed echo signal 584 (an electronic multiplexed signal). In some other cases, an electrical-to-optical (E/O) converter 562 may convert the one or more echo signals 580 (e.g., electronic echo signals) to one or more optical echo signals 582 and send them to the intra-transponder multiplexer unit 564 (e.g., an optical multiplexer unit). The intra-transponder multiplexer unit 564 may use the one or more optical echo signals 582 to generate a multiplexed echo signal 584 (an optical multiplexed signal). The router 550 may then couple the multiplexed echo signal 584 to the link 506.
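To make the upconverter/downconverter pair of FIG. 5 concrete, the sketch below runs a baseband probing pulse up to an assumed RF carrier and mixes a (schematically undelayed) reflection back down to recover the echo signal; the carrier frequency, sample rate, attenuation, and FFT-mask filter are all illustrative choices and not values fixed by the disclosure.

    import numpy as np

    fs = 40.0e6                                  # sample rate (Hz), assumed
    t = np.arange(0, 0.1e-3, 1 / fs)
    f_rf = 10.0e6                                # assumed LO / RF carrier frequency

    probe_bb = np.sinc(1e5 * (t - 20e-6))        # electronic probing signal (baseband)
    rf_probe = probe_bb * np.cos(2 * np.pi * f_rf * t)   # upconverter (cf. unit 556)

    # A reflection attenuates the wave; propagation delay is omitted here
    # to keep the sketch focused on the frequency conversions.
    rf_echo = 0.1 * rf_probe

    # Downconverter (cf. unit 560): mix with the same LO, then low-pass filter.
    mixed = 2 * rf_echo * np.cos(2 * np.pi * f_rf * t)
    spectrum = np.fft.rfft(mixed)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    spectrum[freqs > 2e6] = 0.0
    echo_bb = np.fft.irfft(spectrum, n=len(t))   # recovered echo (~0.1 * probe_bb)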
In some embodiments, for example in the system shown in FIG. 3A, a probing signal received by the transponder may not be a multiplexed probing signal. In these embodiments, the transponder 504 may not include the intra-transponder demultiplexer 552 and the intra-transponder multiplexer 564. In these embodiments, the plurality of probing signals 572 (electronic or optical) may be received directly from the router 550, and the plurality of echo signals (electronic or optical) may be transmitted directly to the router 550. In some embodiments, where the multiplexed probing signal 570 and the multiplexed echo signal 584 are electrical signals, the transponder 504 may not have an optical-to-electrical converter 554 or an electrical-to-optical converter 562. In some embodiments, the transponder 504 may further comprise one or more analog-to-digital (A-to-D) converters and one or more digital-to-analog (D-to-A) converters. In some examples, the one or more D-to-A converters may be used to convert one or more digital probing signals received from the central unit to one or more analog signals, and the one or more A-to-D converters may be used to convert the one or more echo signals 580 or the multiplexed echo signal 584 generated by the transponder 504 to digital signals before transmitting them to the central unit. In some embodiments, the transponder 504 may further comprise one or more signal processing units (e.g., digital signal processing units) configured to transform one or more echo signals 580 generated by the transponder 504 to one or more compressed echo signals before multiplexing them or transmitting them to the central unit. In some examples, an information content of the one or more compressed echo signals may be identical to that of the one or more echo signals, but the one or more compressed echo signals may require less bandwidth to be transmitted to the central unit. The information content may pertain to information usable by the central unit to generate an object signal or determine velocities and/or positions of one or more objects. In some other embodiments, the one or more signal processing units may be configured to transform the one or more echo signals 580 to one or more pre-processed echo signals, where the one or more pre-processed echo signals may be used by the central unit to generate an object signal or determine the positions and/or velocities of one or more objects. In some such embodiments, generating an object signal or determining the positions and/or velocities of one or more objects using one or more pre-processed echo signals (instead of the one or more echo signals) may reduce an amount of processing or an amount of computational resources used by the central unit.
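The disclosure does not fix a particular compression transform. As one hedged possibility, the sketch below keeps only the strongest spectral bins of an echo signal, a lossy reduction that preserves the dominant spectral content while shrinking the payload sent to the central unit; the function names and the keep count are illustrative assumptions.

    import numpy as np

    def compress_echo(echo: np.ndarray, keep: int = 32):
        """Retain only the `keep` strongest spectral bins of an echo signal.
        One possible bandwidth-reduction transform; not the only option."""
        spectrum = np.fft.rfft(echo)
        idx = np.argsort(np.abs(spectrum))[-keep:]       # strongest bins
        return idx.astype(np.int64), spectrum[idx]       # indices + complex values

    def decompress_echo(idx, values, n_samples):
        """Rebuild a time-domain echo signal from the retained bins."""
        spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
        spectrum[idx] = values
        return np.fft.irfft(spectrum, n=n_samples)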
FIG. 6A illustrates an example implementation of the centralized object detection sensor network system 300a described above. The centralized object detection sensor network system 600a may comprise a central unit 601a coupled to a plurality of transponders through a plurality of optical links, where optical probing signals and optical echo signals are transmitted using separate optical communication links. In this example, the central unit 601a includes a signal processing unit 620 that generates a plurality of baseband signals (electronic signals) and transmits them to a transmitter unit 621. In some examples, the plurality of baseband signals may be digital electronic signals. The transmitter unit 621 may use the plurality of baseband signals to generate a plurality of probing signals that comprise a plurality of analog baseband signals. The plurality of probing signals may be transmitted to an optoelectronic unit 631a that comprises a plurality of electrical-to-optical and optical-to-electrical converters. The optoelectronic unit 631a may convert the plurality of probing signals (electronic probing signals) to a plurality of optical probing signals. In some examples, an electrical-to-optical converter may comprise a directly modulated or an externally modulated laser. In some examples, an optical-to-electrical converter may comprise a photoreceiver (e.g., an amplified photodetector). Each optical probing signal may be transmitted to a transponder via an optical link (e.g., a Tx optical link). The transponder may generate a corresponding optical echo signal and transmit it back to the optoelectronic unit 631a via another optical link (e.g., an Rx optical link). In the example shown, a first optical probing signal may be generated by directly modulating a laser 654a in the optoelectronic unit 631a. The first optical probing signal may be transmitted to a photoreceiver 662a in a first transponder 604 via a first optical link 606a. The photoreceiver 662a may convert the first optical probing signal to an electronic probing signal (an electronic baseband signal). Subsequently, a first RF mixer 680a may generate an RF probing signal by upconverting the probing signal using a local oscillator (LO) 684 that generates the corresponding RF carrier signal. In some examples, a first RF filter 682a may be used to eliminate spurious spectral components of the RF probing signal that may be generated during the upconversion process by the mixer 680a. The resulting RF probing signal may be delivered to a first antenna unit 658a configured to convert the RF probing signal to a free space probing wave. The corresponding free space echo wave (reflected by one or more objects) may be received by another antenna unit 658b that converts the free space echo wave to an RF echo signal. The RF echo signal may be filtered using a second RF filter 682b and downconverted by a second RF mixer 680b (fed by the LO 684) to generate a first echo signal (electronic signal) corresponding to the first probing signal. Direct or external modulation of a laser 654b in the first transponder 604 may convert the first echo signal to a first optical echo signal. The first optical echo signal may be transmitted to the central unit 601a via a second optical link 606b. A photoreceiver in the optoelectronic unit 631a may convert the first optical echo signal to an analog reflected baseband signal and send it to a receiver unit 623. The receiver unit may convert the analog reflected baseband signal to a digital reflected baseband signal that can be processed by the signal processing unit 620. The signal processing unit may determine the velocities and positions of one or more objects and generate an object signal based at least in part on the digital reflected baseband signal.

FIG. 6B illustrates an alternative implementation of the centralized object detection sensor network system 600a described above with respect to FIG. 6A and may comprise one or more features, components, and functionalities described with respect to the centralized object detection sensor network system 600a. The centralized object detection sensor network system 600b, shown in FIG. 6B, may comprise a central unit 601b coupled to a plurality of transponders through a plurality of links, where the probing signals and the echo signals associated with each transponder are transmitted over a shared optical communication link.
The generation of probing signals and the processing of echo signals in the central unit 601b of the system 600b can be similar to those described above with respect to the central unit 601a of the system 600a (FIG. 6A). However, the optoelectronic unit 631b of the system 600b may include an optical circulator (e.g., optical circulator 686a) for each pair of lasers and photoreceivers. For example, the optical circulator 686a may be configured to couple a first optical probing signal generated by a first laser 654a to an optical link 606c, and couple the corresponding optical echo signal from the same optical link 606c to a first photoreceiver 662a. Similarly, each transponder of the centralized object detection sensor network system 600b may include an optical circulator. For example, a first transponder 607 may include an optical circulator 686b configured to couple the first optical probing signal received from the optical link 606c to a photoreceiver 662b (in the first transponder 607) and couple the corresponding optical echo signal, generated by a laser 654b in the first transponder 607, to the same optical link 606c.

FIG. 6C illustrates an example implementation of the centralized object detection sensor network system 300b described above. In certain embodiments, the centralized object detection sensor network system 600c can include one or more features of the embodiments previously described with respect to the centralized object detection sensor network systems 300b (FIG. 3B), 300c (FIG. 3C), 600a (FIG. 6A), and 600b (FIG. 6B), the details of which may be omitted herein for brevity. The centralized object detection sensor network system 600c may comprise a central unit 609 coupled to a plurality of transponders through a plurality of links, where each link is an optical communication link. In this example, the central unit 609 includes a signal processing unit 623 that generates a plurality of baseband signals (digital or analog) comprising a plurality of baseband signal groups, where each baseband signal group is generated for a transponder of the object detection sensor network system 600c. An optical transceiver unit 690 of the central unit 609 may be configured to receive and convert the plurality of baseband signals to a plurality of multiplexed optical probing signals, where each multiplexed optical probing signal comprises a group of baseband signals generated for a transponder. The plurality of multiplexed optical probing signals may be transmitted via a plurality of optical links to the plurality of transponders, where each optical link is connected to one transponder. For example, a first optical link may deliver a first multiplexed optical signal to a first transponder 611. Each transponder (e.g., the first transponder 611) may include an optical transceiver unit (e.g., optical transceiver unit 692) that receives a multiplexed optical probing signal, comprising a group of baseband signals associated with the transponder (e.g., transponder 611), and generates a plurality of electronic probing signals. In some cases, the plurality of electronic probing signals may be IF signals or baseband signals. The plurality of electronic probing signals may be upconverted to a plurality of RF probing signals using a mixer 680a, an LO, and a filter 682a. An antenna unit of the transponder (e.g., antenna unit 658a) may use the plurality of RF probing signals to generate a plurality of free space probing waves and convert the corresponding free space echo waves to a plurality of RF echo signals (electronic signals).
The plurality of RF echo signals may be downconverted to a plurality of electronic echo signals (e.g., each using a filter 682b, an LO, and a mixer 680b) and sent to the optical transceiver 692. The optical transceiver 692 may use the plurality of electronic echo signals to generate a multiplexed optical echo signal and transmit it to the central unit 609 via the same optical link from which the multiplexed probing signal was received. In some examples, the RF echo signals may be generated by a separate antenna unit (e.g., antenna unit 658b). The optical transceiver unit 690 receives a plurality of multiplexed optical echo signals (each from a transponder) and generates a plurality of reflected baseband signals (electronic signals). The plurality of reflected baseband signals may comprise one or more echo signal groups, where each echo signal group corresponds to a multiplexed optical echo signal received from a transponder. The optical transceiver unit 690 transmits the plurality of reflected baseband signals (digital or analog signals) to the signal processing unit 623, where they are used to extract information pertaining to the position and/or velocity of one or more objects in an environment monitored by the centralized object detection sensor network system 600c. In some embodiments, the optical transceiver unit 690 or 692 may include one or more lasers, photoreceivers, optical multiplexers, and optical demultiplexers. In some embodiments, the optical transceiver unit 690 may also include a plurality of digital-to-analog converters.

FIG. 7 illustrates an example implementation of the central unit 303 used in the centralized object detection sensor network system 300c described above with respect to FIG. 3C. In certain embodiments, the central unit 703 can include one or more features of the embodiments previously described with respect to the central units 302 (FIG. 3B) and 303 (FIG. 3C). In this example, the central unit 703 includes a signal processing unit 719 that generates a plurality of digital baseband signals 734 and sends them to a transmitter unit 723. The transmitter unit 723 may convert the plurality of digital baseband signals 734 to a plurality of analog baseband signals (analog electronic signals) and subsequently convert them to a plurality of analog optical signals. In the example shown, three digital baseband signals are received by a digital-to-analog converter 723a in the transmitter unit 723. The digital-to-analog converter 723a converts these signals to three analog baseband signals and feeds them to three lasers 723b having three different wavelengths (λ1, λ2, and λ3). Each analog baseband signal may directly modulate a laser (e.g., the amplitude or phase of the laser output), generating three optical probing signals 738, each having a different optical wavelength. Each optical probing signal may comprise an optical signal whose phase or amplitude is modulated by one of the analog baseband signals generated by the digital-to-analog converter 723a. The optical probing signals 738 may be transmitted to an optical multiplexer 726a (intra-transponder multiplexer) that combines them into one multiplexed optical signal 741. An optical splitter (or optical power divider) 726b may receive the multiplexed optical signal 741 and generate a plurality of secondary multiplexed optical probing signals 742, where each secondary optical probing signal is a copy of the multiplexed optical signal received by the splitter 726b.
The number of secondary multiplexed optical probing signals may be equal to the number of transponders fed by the central unit 703. The plurality of secondary multiplexed optical probing signals 742 are transmitted to a router unit 732, where each secondary multiplexed optical probing signal is received by one of a plurality of circulators and coupled to one of the links 106 (optical links) that are connected to the transponders 104 fed by the central unit 703. For example, a first secondary multiplexed optical probing signal may be coupled by the circulator 786 to a first link connected to a first transponder. The first transponder may be a transponder in the centralized object detection sensor network system 300c. The first transponder may include one or more of the embodiments previously described with respect to the transponder 504 described above with respect to FIG. 5. The first transponder may receive the first secondary multiplexed optical probing signal and generate three echo signals (electronic signals), each corresponding to one of the analog baseband signals generated by the digital-to-analog converter 723a. In some examples, each echo signal may comprise a reflected baseband signal carried by an intermediate frequency (IF) carrier (e.g., the amplitude or phase of the IF carrier may be modulated by the echo signal). The frequency of the IF carrier may be different for each echo signal (for example, three IF carriers having frequencies IF1, IF2, and IF3), and each may be modulated by one of the three reflected baseband signals. In some examples, the first transponder may convert the echo signals to optical echo signals (e.g., using the E/O converter 562 (FIG. 5)) and multiplex them as a single multiplexed optical echo signal (e.g., using the intra-transponder multiplexer 564 and based on wavelength multiplexing). In some examples, the first multiplexed optical echo signal may be generated using the wavelengths used by the transmitter unit 723 to generate the corresponding multiplexed optical signal 741. The first multiplexed optical echo signal is then transmitted to the central unit 703 via the first optical link. The circulator 786 couples the first multiplexed optical echo signal to a receiver unit 725 of the central unit 703. In some embodiments, the central unit 703 may not include an intra-transponder demultiplexer unit (unlike the central unit 303 (FIG. 3C)). In some such embodiments, the receiver unit 725 may include a plurality of photoreceivers 725b, each configured to receive a multiplexed optical echo signal from the router 732 and convert it to a plurality of reflected baseband signals (electronic signals). The photoreceiver may first convert the multiplexed optical echo signal to a multiplexed electronic echo signal. The multiplexed electronic signal may comprise a plurality of IF carriers, each modulated by one of the plurality of reflected baseband signals. The photoreceiver may further include electronic components and circuitry to generate the plurality of reflected baseband signals (individual signals) using the multiplexed electronic signal. In the example shown, a first photoreceiver may receive the first multiplexed optical echo signal and generate three reflected baseband signals (electronic signals) corresponding to the three baseband signals used to generate the three optical probing signals 738. The photoreceiver may comprise a first photodetector, an electronic frequency demultiplexer, and three down-converters.
The photodetector is configured to convert the first multiplexed optical echo signal to a multiplexed electronic signal comprising three IF carriers, IF1, IF2, and IF3, each modulated (e.g., amplitude or phase modulated) by one of the three reflected baseband signals. The electronic demultiplexer may be configured to generate three IF signals, each comprising one of the IF carriers modulated by one of the reflected baseband signals. Subsequently, the three down-converters each generate one reflected baseband signal (e.g., by mixing the IF signal with the corresponding IF carrier).
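One way to picture the electronic frequency demultiplexing and downconversion chain just described is the sketch below, which places three baseband signals on three assumed IF carriers and then recovers one of them by mixing and low-pass filtering; all frequencies, pulse shapes, and the FFT-mask filter are illustrative stand-ins for the filters and mixers in the photoreceiver.

    import numpy as np

    fs = 10.0e6                                   # sample rate (Hz), assumed
    t = np.arange(0, 2e-3, 1 / fs)
    if_carriers = [1.0e6, 1.5e6, 2.0e6]           # assumed IF1, IF2, IF3
    basebands = [np.sinc(5e3 * (t - d)) for d in (0.5e-3, 1.0e-3, 1.5e-3)]

    # Multiplexed electronic echo signal: three modulated IF carriers summed.
    multiplexed = sum(b * np.cos(2 * np.pi * f * t)
                      for b, f in zip(basebands, if_carriers))

    def downconvert(signal, f_if, cutoff=200e3):
        """Mix with the IF carrier, then low-pass filter by FFT masking."""
        mixed = 2 * signal * np.cos(2 * np.pi * f_if * t)
        spectrum = np.fft.rfft(mixed)
        freqs = np.fft.rfftfreq(len(t), 1 / fs)
        spectrum[freqs > cutoff] = 0.0
        return np.fft.irfft(spectrum, n=len(t))

    reflected_baseband_1 = downconvert(multiplexed, if_carriers[0])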
The centralized sensor network systems described above can be different embodiments of a centralized sensor network system for detecting objects using radio waves. As such, the free space probing waves and free space echo waves discussed above may comprise radio waves having frequencies between 1 MHz and 100 GHz. In some embodiments, some of the features, devices, links, units, components, methods, and processes described with respect to the sensor network systems 300a (FIG. 3A) and 300b (FIG. 3B) may be used to detect objects using light waves.

FIG. 8 illustrates another example of a centralized sensor network system 800. The illustrated system integrates lidar and radar sensing subsystems in a centralized system where certain units are shared between the two subsystems. Advantageously, the illustrated centralized lidar-radar sensing system allows multi-modal (e.g., based on optical and radio frequency waves) detection and ranging using one central unit 808 and a plurality of hybrid transponders (e.g., lidar-radar transponders configured to generate and detect both light waves and radio waves). The centralized object detection sensor network system 800 can include one or more features of embodiments previously described with respect to the various centralized object detection sensor network systems described above, the details of which may not be repeated herein for brevity. The illustrated centralized sensor network system 800 uses a single central unit 808 connected to one or more lidar-radar transponders 804 through a plurality of communication links 806 (each herein referred to as a link). The central unit 808 may be configured to generate and process probing signals configured for detection of objects based on free space light waves, herein referred to as lidar probing signals, and probing signals configured for detection of objects based on radio waves, herein referred to as radar probing signals. In some embodiments, a portion of the links can be optical communication links and the rest can be electronic or RF communication links. Each lidar-radar transponder may be connected to the central unit 808 via one or more links. In some cases, the links connecting a lidar-radar transponder to the central unit may comprise both RF/electronic and optical communication links. The central unit 808 transmits one or more radar probing signals and one or more lidar probing signals to each lidar-radar transponder and receives one or more radar echo signals and one or more lidar echo signals from each lidar-radar transponder. In some embodiments, the lidar or radar probing signals and the lidar or radar echo signals associated with a lidar-radar transponder may be transmitted and received via a single link. In some other embodiments, each lidar-radar transponder may be connected to the central unit 808 via a single link used for transmitting all probing signals and echo signals associated with the lidar-radar transponder.

The central unit 808 includes a signal processing unit 819 (shared between the lidar and radar subsystems), a radar central unit 808a, a lidar central unit 808b, and a router unit 832. The signal processing unit 819 generates a plurality of radar baseband signals 834a and a plurality of lidar baseband signals 834b. The plurality of radar baseband signals 834a may be transmitted to the radar central unit 808a, and the plurality of lidar baseband signals 834b may be transmitted to the lidar central unit 808b. The plurality of radar baseband signals may comprise the plurality of baseband signals described above. The router unit 832 is configured to couple the radar or lidar probing signals from the radar central unit 808a or the lidar central unit 808b to the plurality of links 806, and to couple the radar or lidar echo signals received from the one or more lidar-radar transponders 804 via the plurality of links 806 to the radar central unit 808a or the lidar central unit 808b. The router may include one or more optical or RF/electronic routers. The radar central unit 808a may be configured to use the plurality of radar baseband signals to generate one or more radar probing signals 842a and transmit them to the router unit 832. In some embodiments, the radar central unit 808a may comprise one or more units included in the central unit 302 (FIG. 3B) or the central unit 303 (FIG. 3C). In some cases, the one or more radar probing signals may comprise one or more multiplexed radar probing signals (e.g., each comprising a plurality of radar probing signals multiplexed as one signal). The router 832 couples the radar probing signals associated with a lidar-radar transponder through a link connected to the lidar-radar transponder, for example, a first link 806a connected to a first lidar-radar transponder 804 of the plurality of lidar-radar transponders shown in the centralized sensor network system 800. In some examples, the link 806a may be a bidirectional link. In some examples, the link 806a may comprise two or more links (e.g., one or more links for communicating radar signals and one or more links for communicating lidar signals). The transponder 804 may comprise a router unit 895, a radar transponder 804a, and a lidar transponder 804b. The router 895 couples the radar probing signals received from the link 806a to the radar transponder 804a and couples back one or more radar echo signals generated by the radar transponder 804a to the link 806a. The radar transponder may convert the one or more radar probing signals to one or more free space radar probing waves, direct them to an environment, and convert one or more free space radar echo waves to one or more radar echo signals. The free space radar echo waves may be reflections of the one or more free space radar probing waves by one or more objects in an environment probed by the transponder 804a. In some examples, the one or more radar echo signals may comprise one or more multiplexed radar echo signals. The router 832 may couple the radar echo signals 844a received from the link 806a to the radar central unit 808a. The radar central unit 808a may use the radar echo signals 844a to generate one or more reflected baseband signals 836a and send them to the signal processing unit 819. The signal processing unit 819 may use the one or more baseband signals 834a and the one or more reflected baseband signals 836a to determine positions and/or velocities of the one or more objects.
In some examples, the signal processing unit 819 may use the one or more baseband signals 834a and the one or more reflected baseband signals 836a to generate one or more radar object signals usable for determining velocities and locations of the one or more objects. The lidar central unit 808b may be configured to use the plurality of lidar baseband signals to generate one or more lidar probing signals 842b and transmit them to the router unit 832. In some embodiments the lidar central unit 808b may comprise one or more units included in the central unit 302 (FIG. 3B) or the central unit 303 (FIG. 3C). In some cases, the one or more lidar probing signals may comprise one or more multiplexed lidar probing signals (e.g., each comprising a plurality of lidar probing signals multiplexed as one signal). The router 832 couples the lidar probing signals associated with a lidar-radar transponder through a link connected to the lidar-radar transponder, for example, the first link 806a connected to the first lidar-radar transponder 804 of the plurality of lidar-radar transponders shown in the centralized sensor network system 800. In some examples, the link 806a may be a bidirectional link. In some examples, the link 806a may comprise two or more links (e.g., one or more links for communicating radar signals and one or more links for communicating lidar signals). The router 895 in the first lidar-radar transponder 804 couples the lidar probing signals received from the link 806a to the lidar transponder 804b and couples back one or more lidar echo signals generated by the lidar transponder 804b to the link 806a. The lidar transponder 804b may convert the one or more lidar probing signals to one or more free space lidar probing waves (light waves), direct them to an environment and convert one or more free space lidar echo waves (reflected light waves) to one or more lidar echo signals. The free space lidar echo waves may be reflections of one or more free space lidar probing waves by one or more objects in the environment probed by the transponder 804b. In some examples, the one or more lidar echo signals may comprise one or more multiplexed lidar echo signals. The router 832 may couple the lidar echo signals 844b received from the link 806a to the lidar central unit 808b. The lidar central unit 808b may use the lidar echo signals 844b to generate one or more reflected lidar baseband signals 836b and send them to the signal processing unit 819. The signal processing unit 819 may use the one or more lidar baseband signals 834b and the one or more reflected lidar baseband signals 836b to determine positions and/or velocities of the one or more objects. In some examples, the signal processing unit 819 may use the one or more lidar baseband signals 834b and the one or more reflected lidar baseband signals 836b to generate one or more lidar object signals usable for determining velocities and locations of the one or more objects. In some embodiments, the signal processing unit 819 may use a combination of the radar baseband signals, the reflected radar baseband signals, the lidar baseband signals and the reflected lidar baseband signals to determine positions and/or velocities of the one or more objects or to generate one or more hybrid object signals usable for determining velocities and locations of the one or more objects. Advantageously, in these embodiments, the positions and/or velocities of the one or more objects can be determined with higher resolution and greater accuracy.
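One way such a combination could improve accuracy is illustrated by the following minimal sketch. Inverse-variance weighting of a radar-derived and a lidar-derived range estimate is an assumption chosen purely for illustration, not a fusion rule specified by the text, and all numeric values are hypothetical.

import numpy as np

def fuse(est_radar, var_radar, est_lidar, var_lidar):
    """Combine two independent estimates by inverse-variance weighting."""
    w_r, w_l = 1.0 / var_radar, 1.0 / var_lidar
    fused = (w_r * est_radar + w_l * est_lidar) / (w_r + w_l)
    fused_var = 1.0 / (w_r + w_l)   # always <= min(var_radar, var_lidar)
    return fused, fused_var

# Hypothetical values: radar range 35.2 m (var 0.04), lidar range 35.0 m (var 0.01).
r, v = fuse(35.2, 0.04, 35.0, 0.01)
print(f"fused range: {r:.2f} m, variance: {v:.3f}")

The fused variance is never larger than the smaller of the two input variances, which is one simple sense in which a hybrid estimate can be "more accurate" than either modality alone.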
In addition, hybrid object signals may be used to extract certain information related to the one or more objects or the environment that may not be extractable from radar object signals or lidar object signals separately. In various embodiments, the central unit may determine the velocities and the positions of one or more objects based at least in part on one or more reflected baseband signals generated using one or more echo signals received from the one or more transponders. In some cases, the central unit may determine the velocities and the positions of the one or more objects based at least in part on one or more reflected baseband signals generated using one or more echo signals received from the one or more transponders and one or more baseband signals associated with the reflected baseband signals. In some other cases, the central unit may determine the velocities and/or the positions of the one or more objects based at least in part on one or more reflected baseband signals generated using one or more echo signals received from the one or more transponders and one or more parameter values associated with one or more signal generators that generate the one or more baseband signals associated with the reflected baseband signals. The central unit may determine the velocities of the one or more objects with respect to a reference frame. For example, the reference frame may be a rest frame of the transponders used to detect the objects (e.g., by sending probing signals and receiving echo signals). The rest frame of the transponders may be a reference frame in which the coordinates of the transponders do not change over time. As described above, in various embodiments a central unit of a centralized sensor network system may generate an object signal based at least in part on one or more reflected baseband signals generated using the one or more echo signals received from the plurality of transponders of the centralized sensor network system. FIG. 9 illustrates an example centralized sensor network system 900 with a central unit 902 that generates an object signal 905. The central unit 902 may transmit the object signal 905 to an external processing system 908. The external processing system 908 may use the object signal 905 to determine the velocities and/or positions of the one or more objects detected by the centralized sensor network system 900. The velocities and/or positions may be determined with respect to the one or more transponders used to detect the one or more objects. The external processing system 908 may include one or more memories and one or more processors. The one or more processors may be configured to determine the velocities and the positions of the one or more objects by executing machine readable instructions stored in the one or more memories. The external processing system may transmit the determined velocities and/or positions of the one or more objects to a user interface 910 or a digital interface 912. The user interface 910 may be configured to display the determined velocities and the positions along with other information related to the detection of the one or more objects (e.g., locations of the transponders, uncertainties associated with the determination process, information about the probing signals and free space probing waves used to detect the one or more objects, and the like).
The digital interface 912 may be configured to transmit the determined velocities and/or positions and other information related to the detection of the one or more objects to one or more electronic devices (e.g., the electronic devices of an automobile, an airplane, a factory, a user, and the like). In some examples, the object signal may carry data (e.g., raw data) extracted from one or more reflected baseband signals by a signal processing unit of the central unit. The extracted data may be raw data, compressed raw data or processed raw data (e.g., processed by the signal processing unit of the central unit). In some cases, compressed raw data may comprise the information usable for determining velocities or positions of one or more detected objects but may be smaller compared to raw data and require less bandwidth to be transmitted to the processing system 908 within a given time interval. In some cases, the processed raw data comprises the same information as raw data and can be used to determine the velocities and/or positions of the one or more objects detected by the centralized sensor network system. In some examples, the external processing system 908 may need less computational power or fewer resources to determine the velocities and/or positions of the one or more objects using an object signal comprising processed raw data as compared to using an object signal comprising raw data or compressed raw data. In some examples, the object signal may additionally carry data associated with the baseband signals used to generate the reflected baseband signals. In yet other examples, the object signal may carry data associated with algorithms, signal generators and/or parameter values used by the signal processing unit of the central unit to generate the one or more baseband signals. In some cases, the object signal may be a multiplexed signal generated from a plurality of signals each usable for determining the velocity or position of one object of the plurality of objects detected by the centralized sensor network system.

EXAMPLE APPLICATIONS

In some embodiments, one or more of the centralized sensor network systems described above can be used to monitor the positions and/or the velocities of one or more objects in an environment surrounding an automobile. FIG. 10 shows a centralized sensor network system (e.g., centralized sensor network system 100, 200, 300b or 800) used for such an application. In this example, a central unit 1002 of the centralized sensor network system is communicatively connected to 8 transponders 1004a-1004h installed at different sites on the automobile, with minimum impact on the vehicle's external envelope. In some examples, one or more of the transponders 1004a-1004h may be replaced by a sensor (e.g., a camera, an acoustic sensor, and the like). In some embodiments, one or more of the centralized sensor network systems described above can be used to monitor the positions and/or the velocities of one or more objects in an environment surrounding an aircraft. In such embodiments, a plurality of transponders may be attached to various positions on the aircraft body and enable detecting objects in any arbitrary direction. The one or more centralized sensor network systems may comprise, for example, the centralized sensor network system 100, 200, 300b or 800 described above. In some embodiments, one or more of the centralized sensor network systems described above can be used to monitor an environment, for example, an open (or outdoor) field.
The open field can be an airport, a parking lot, a stadium, a city street intersection, an animal farm, a wind turbine field, or a sea coastal area. The one or more centralized sensor network systems may comprise, for example, the centralized sensor network system 100, 200, 300b or 800 described above. In some embodiments, one or more of the centralized sensor network systems described above can be used to monitor an indoor environment, for example, a room, a factory floor, a factory assembly line or conveyor belt, or a warehouse. The one or more centralized sensor network systems may comprise, for example, the centralized sensor network system 100, 200, 201, 300b or 800 described above.

ADDITIONAL EXAMPLES

Group I

1. A centralized object detection sensor network system for detecting one or more objects in an environment, comprising: a central unit configured to: generate one or more baseband signals, generate one or more probing signals using the one or more baseband signals and transmit the one or more probing signals to one or more transponders, receive one or more echo signals from the one or more transponders, and detect the one or more objects using the one or more baseband signals and the one or more echo signals, wherein the one or more transponders are physically separate from the central unit while being communicatively coupled thereto through one or more communication links, and wherein the one or more transponders are configured to: receive the one or more probing signals from the central unit through the one or more communication links, generate one or more radio frequency (RF) probing signals using the one or more probing signals and an RF carrier signal, convert the one or more RF probing signals into free space probing waves, direct the free space probing waves to the environment for detecting the one or more objects, receive one or more free space echo waves from the one or more objects, and generate the one or more echo signals using the free space echo waves and transmit the one or more echo signals to the central unit through the one or more communication links.
2. The sensor network system of the above Embodiment, wherein the sensor network system comprises a plurality of transponders coupled to the central unit and wherein the central unit serves as a common central unit for detecting the one or more objects based on the one or more echo signals generated by the plurality of transponders.
3. The sensor network system of any of the above Embodiments, wherein the sensor network system is configured to detect the one or more objects using one or more sensing modalities, wherein different ones of the transponders are configured for same or different sensing modalities.
4. The sensor network system of any of the above Embodiments, wherein the one or more modalities include at least radar sensing.
5. The sensor network system of any of the above Embodiments, wherein the one or more modalities include one or more of radar sensing, lidar sensing, imaging, kinematic sensing and position sensing.
6. The sensor network system of any of the above Embodiments, wherein the central unit and the one or more transponders are physically separated by a distance of about a centimeter to a kilometer.
7. The sensor network system of any of the above Embodiments, wherein detecting the one or more objects comprises determining distances between the one or more transponders and the one or more objects.
8.
The sensor network system of any of the above Embodiments, wherein detecting the one or more objects comprises determining velocities of the one or more objects with respect to the one or more transponders.
9. The sensor network system of any of the above Embodiments, wherein the central unit comprises: a signal processing unit configured to generate the one or more baseband signals; a transmitter coupled to the signal processing unit and configured to receive the one or more baseband signals from the signal processing unit to generate the one or more probing signals; and a receiver coupled to the signal processing unit and configured to receive the one or more echo signals to generate one or more reflected baseband signals and feed the one or more reflected baseband signals to the signal processing unit.
10. The sensor network system of any of the above Embodiments, wherein the central unit comprises a router configured to couple the one or more probing signals to the one or more communication links, and to couple the one or more echo signals, received from the one or more communication links, to the receiver.
11. The sensor network system of any of the above Embodiments, wherein the one or more transponders generate the one or more echo signals by downconverting one or more RF echo signals.
12. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise one or more baseband signals and the one or more echo signals comprise one or more reflected baseband signals.
13. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise one or more intermediate frequency (IF) signals comprising intermediate frequency carriers modulated by baseband signals, and the one or more echo signals comprise one or more intermediate frequency (IF) signals comprising intermediate frequency carriers modulated by reflected baseband signals.
14. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise a baseband signal and the one or more echo signals comprise one or more IF signals comprising intermediate frequency carriers modulated by one or more baseband signals.
15. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise one or more multiplexed probing signals and the one or more echo signals comprise one or more multiplexed echo signals.
16. The sensor network system of any of the above Embodiments, wherein the one or more probing signals and/or the one or more echo signals comprise digital signals.
17. The sensor network system of any of the above Embodiments, wherein at least one of the probing signals comprises an optical probing signal comprising an optical carrier modulated by the probing signal.
18. The sensor network system of any of the above Embodiments, wherein at least one of the one or more echo signals comprises an optical echo signal comprising an optical carrier modulated by the echo signal.
19. The sensor network system of any of the above Embodiments, wherein the central unit comprises one or more electrical-to-optical converters configured to convert electrical probing signals to one or more optical probing signals and one or more optical-to-electrical converters configured to convert one or more optical echo signals to electrical echo signals.
20.
The sensor network system of any of the above Embodiments, wherein at least one of the one or more transponders comprises: one or more electrical-to-optical converters configured to convert electrical probing signals to the one or more optical probing signals, and one or more optical-to-electrical converters configured to convert the one or more optical echo signals to electrical echo signals.
21. The sensor network system of any of the above Embodiments, wherein the one or more multiplexed probing signals and/or the one or more multiplexed echo signals are wavelength multiplexed signals.
22. The sensor network system of any of the above Embodiments, wherein at least one communication link comprises an electrical link or an RF link.
23. The sensor network system of any of the above Embodiments, wherein at least one communication link comprises an optical link.
24. The sensor network system of any of the above Embodiments, wherein at least one transponder comprises an antenna unit, wherein the antenna unit comprises one or more RF antennas or optical antennas.
25. The sensor network system of any of the above Embodiments, wherein at least one free space probing wave and at least one free space echo wave are free space radio waves.
26. The sensor network system of any of the above Embodiments, wherein at least one free space probing wave and at least one free space echo wave are free space light waves.
27. The sensor network system of any of the above Embodiments, wherein the antenna unit comprises at least one optical antenna configured to generate and/or receive light waves.
28. The sensor network system of any of the above Embodiments, wherein the central unit comprises a radar central unit configured for detecting the one or more objects by radar sensing and a lidar central unit configured for detecting the one or more objects by lidar sensing.
29. The sensor network system of any of the above Embodiments, wherein the radar central unit and the lidar central unit share one or more of a common signal processing unit, a common router, and common links between the central unit and the one or more transponders.
30. The sensor network system of any of the above Embodiments, wherein the probing signals and the echo signals are communicated between a transponder of the one or more transponders and the central unit via a bidirectional communication link.
31. The sensor network system of any of the above Embodiments, wherein the one or more probing signals and the one or more echo signals are communicated between a transponder of the one or more transponders and the central unit via two separate communication links.
32. The sensor network system of any of the above Embodiments, wherein the signal processing unit comprises at least a digital signal processing unit.
33. The sensor network system of any of the above Embodiments, wherein the central unit comprises one or more digital-to-analog converters and analog-to-digital converters.
34. The sensor network system of any of the above Embodiments, wherein the central unit is configured to use the one or more probing signals and one or more echo signals to generate an object signal usable for determining distances between the one or more transponders and the one or more objects, and/or for determining velocities of the one or more objects with respect to the one or more transponders.
35. The sensor network system of any of the above Embodiments, wherein the sensor network system is installed as part of an autonomous vehicle.
36.
The sensor network system of any of the above Embodiments, wherein the free space probing waves are continuous waves.

Group II

1. A centralized object detection sensor network system for detecting one or more objects in an environment, comprising: a central unit configured to: generate one or more baseband signals, generate one or more probing signals using the one or more baseband signals and transmit the one or more probing signals to one or more transponders, receive one or more echo signals from the one or more transponders, and detect the one or more objects using the one or more baseband signals and the one or more echo signals, wherein the one or more transponders are physically separate from the central unit while being communicatively coupled thereto through one or more communication links, and wherein the one or more transponders are configured to: receive the one or more probing signals from the central unit through the one or more communication links and generate free space probing waves therefrom, direct the free space probing waves to the environment for detecting the one or more objects, receive one or more free space echo waves from the one or more objects, and generate the one or more echo signals using the free space echo waves and transmit the one or more echo signals to the central unit through the one or more communication links.
2. The sensor network system of the above Embodiment, wherein the central unit comprises: a signal processing unit configured to generate the one or more baseband signals; a transmitter coupled to the signal processing unit and configured to receive the one or more baseband signals from the signal processing unit to generate the one or more probing signals; and a receiver coupled to the signal processing unit and configured to receive the one or more echo signals to generate one or more reflected baseband signals and feed the one or more reflected baseband signals to the signal processing unit.
3. The sensor network system of any of the above Embodiments, wherein the central unit comprises a router configured to couple the one or more probing signals to the one or more communication links, and to couple the one or more echo signals, received from the one or more communication links, to the receiver.
4. The sensor network system of any one of the above Embodiments, wherein the one or more probing signals comprise the one or more baseband signals and the one or more echo signals comprise the one or more reflected baseband signals.
5. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise one or more intermediate frequency (IF) signals comprising intermediate frequency carriers modulated by the one or more baseband signals, and the one or more echo signals comprise one or more intermediate frequency (IF) signals comprising intermediate frequency carriers modulated by the one or more reflected baseband signals.
6. The sensor network system of any of the above Embodiments, wherein the one or more probing signals comprise the one or more baseband signals and the one or more echo signals comprise one or more IF signals comprising intermediate frequency carriers modulated by the one or more reflected baseband signals.
7. The sensor network system of any of the above Embodiments, wherein at least one of the one or more probing signals comprises an optical probing signal comprising an optical carrier modulated by one of the probing signals.
8.
The sensor network system of any of the above Embodiments, wherein at least one of the one or more echo signals comprises an optical echo signal comprising an optical carrier modulated by one of the echo signals.
9. The sensor network system of any one of the above Embodiments, wherein the central unit comprises one or more electrical-to-optical converters configured to convert electrical probing signals to one or more optical probing signals and one or more optical-to-electrical converters configured to convert one or more optical echo signals to electrical echo signals.
10. The sensor network system of any one of the above Embodiments, wherein at least one of the one or more transponders comprises: one or more electrical-to-optical converters configured to convert electrical probing signals to the one or more optical probing signals, and one or more optical-to-electrical converters configured to convert the one or more optical echo signals to electrical echo signals.
11. The sensor network system of any one of the above Embodiments, wherein the sensor network system comprises a plurality of transponders coupled to the central unit, and wherein the central unit serves as a common central unit for detecting the one or more objects based on the one or more echo signals generated by the plurality of transponders.
12. The sensor network system of any one of the above Embodiments, wherein at least one communication link comprises an electrical link or an RF link.
13. The sensor network system of any one of the above Embodiments, wherein at least one communication link comprises an optical link.
14. The sensor network system of any one of the above Embodiments, wherein at least one free space probing wave and at least one free space echo wave are free space radio waves.
15. The sensor network system of any one of the above Embodiments, wherein the probing signals and the echo signals are communicated between a transponder of the one or more transponders and the central unit via a bidirectional communication link.
16. The sensor network system of any one of the above Embodiments, wherein the one or more probing signals and the one or more echo signals are communicated between a transponder of the one or more transponders and the central unit via two separate communication links.
17. The sensor network system of any one of the above Embodiments, wherein the central unit is configured to detect the one or more objects at least in part by determining distances between the one or more transponders and the one or more objects.
18. The sensor network system of any one of the above Embodiments, wherein the central unit is configured to detect the one or more objects at least in part by determining velocities of the one or more objects with respect to the one or more transponders.
19. The sensor network system of any of the above Embodiments, wherein the sensor network system comprises a plurality of transponders coupled to the central unit, and wherein the transponders may comprise one or more digital signal processing units to generate an object signal usable for determining distances between the one or more transponders and the one or more objects, and/or for determining velocities of the one or more objects with respect to the one or more transponders.

Group III

1.
A centralized object detection sensor network system for detecting one or more objects in an environment, comprising: a central unit communicatively coupled to one or more transponders through one or more communication links, wherein the one or more transponders are physically separate from the central unit, wherein the central unit is configured to: generate one or more multiplexed probing signals and transmit at least a multiplexed probing signal of the one or more multiplexed probing signals to a transponder of the one or more transponders, receive a multiplexed echo signal from the transponder, and detect the one or more objects using one or more reflected baseband signals, wherein the reflected baseband signals are generated using the multiplexed echo signal, and wherein the transponder is configured to: receive the multiplexed probing signal from the central unit through a communication link of the one or more communication links, generate one or more radio frequency (RF) probing signals using the multiplexed probing signal and one or more RF carrier signals, convert the one or more RF probing signals into free space probing waves, direct the free space probing waves to the environment for detecting the one or more objects, receive one or more free space echo waves from the one or more objects, and generate the multiplexed echo signal using the free space echo waves and transmit the multiplexed echo signal to the central unit through the communication link.
2. The sensor network system of the above Embodiment, wherein the communication link is a bidirectional communication link.
3. The sensor network system of any of the above Embodiments, wherein the one or more multiplexed probing signals comprise one or more optical probing signals and the one or more multiplexed echo signals comprise one or more optical echo signals.
4. The sensor network system of any of the above Embodiments, wherein the one or more multiplexed probing signals and/or the one or more multiplexed echo signals are wavelength multiplexed signals.
5. The sensor network system of any of the above Embodiments, wherein the free space probing waves are continuous waves (CW).
6. The sensor network system of any of the above Embodiments, wherein the central unit comprises one or more digital-to-analog converters and analog-to-digital converters.
7. The sensor network system of any of the above Embodiments, wherein the signal processing unit comprises at least a digital signal processing unit.
8. The sensor network system of any of the above Embodiments, wherein the central unit is configured to use the one or more probing signals and one or more echo signals to generate an object signal usable for determining distances between the one or more transponders and the one or more objects, and/or for determining velocities of the one or more objects with respect to the one or more transponders.
9. The sensor network system of any of the above Embodiments, wherein the sensor network system is configured to detect the one or more objects using one or more sensing modalities, wherein different ones of the transponders are configured for same or different sensing modalities.
10. The sensor network system of any of the above Embodiments, wherein the one or more modalities include one or more of radar sensing, lidar sensing, imaging, kinematic sensing and position sensing.
11. The sensor network system of any of the above Embodiments, wherein the one or more modalities include at least radar sensing.
12.
The sensor network system of any of the above Embodiments, wherein the central unit and the one or more transponders are physically separated by a distance of about a centimeter to one hundred kilometers.
13. The sensor network system of any of the above Embodiments, wherein the one or more optical probing signals and/or the one or more optical echo signals comprise one or more optical carriers modulated with digital signals.
14. The sensor network system of any one of the above Embodiments, wherein the sensor network system is installed as part of an autonomous vehicle.

Group IV

1. A centralized object detection sensor network system for detecting one or more objects in an environment, comprising a central unit communicatively coupled to one or more lidar-radar transponders through one or more communication links, wherein the one or more lidar-radar transponders are physically separate from the central unit, and wherein the central unit is configured to: generate one or more radar probing signals and one or more lidar probing signals, transmit the one or more radar probing signals and the one or more lidar probing signals to the one or more lidar-radar transponders, receive one or more radar echo signals and one or more lidar echo signals from the one or more lidar-radar transponders, and detect the one or more objects based at least in part on the one or more radar echo signals and the one or more lidar echo signals.
2. The sensor network system of the above Embodiment, wherein at least a lidar-radar transponder of the one or more lidar-radar transponders comprises a lidar transponder configured to generate lidar probing waves and a radar transponder configured to generate radar probing waves, wherein the lidar probing waves comprise light waves and the radar probing waves comprise radio waves.
3. The sensor network system of any of the above Embodiments, wherein the at least one lidar-radar transponder comprises an antenna unit, wherein the antenna unit comprises one or more RF antennas and one or more optical antennas configured to send and receive the light waves.
4. The sensor network system of any of the above Embodiments, wherein the central unit comprises a radar central unit configured for detecting the one or more objects by radar sensing and a lidar central unit configured for detecting the one or more objects by lidar sensing.
5. The sensor network system of any of the above Embodiments, wherein the radar central unit and the lidar central unit share one or more of a common signal processing unit, a common router, and common links between the central unit and the one or more transponders.
6. The sensor network system of any of the above Embodiments, wherein the central unit detects the one or more objects by determining distances between the one or more transponders and the one or more objects, and/or velocities of the one or more objects with respect to the one or more transponders, or by generating an object signal usable for determining distances between the one or more transponders and the one or more objects, and/or for determining velocities of the one or more objects with respect to the one or more transponders, based at least in part on the one or more lidar echo signals and the one or more radar echo signals.
Terminology

It will be appreciated that each of the processes, methods, and algorithms described herein and/or depicted in the figures may be embodied in, and fully or partially automated by, code modules executed by one or more physical computing systems, hardware computer processors, application-specific circuitry, and/or electronic hardware configured to execute specific and particular computer instructions. For example, computing systems may include general purpose computers (e.g., servers) programmed with specific computer instructions or special purpose computers, special purpose circuitry, and so forth. A code module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language. In some embodiments, particular operations and methods may be performed by circuitry that is specific to a given function. Further, certain embodiments of the functionality of the present disclosure are sufficiently mathematically, computationally, or technically complex that application-specific hardware or one or more physical computing devices (utilizing appropriate specialized executable instructions) may be necessary to perform the functionality, for example, due to the volume or complexity of the calculations involved or to provide results substantially in real-time. For example, input data may include sensor data collected at very short time intervals, with large amounts of data collected at each time interval. As such, specifically programmed computer hardware may be necessary to process the input data in a commercially reasonable amount of time. Code modules or any type of data may be stored on any type of non-transitory computer-readable medium, such as physical computer storage including hard drives, solid state memory, random access memory (RAM), read only memory (ROM), optical disc, volatile or non-volatile storage, combinations of the same and/or the like. In some embodiments, the non-transitory computer-readable medium may be part of one or more of the local processing and data module, the remote processing module, and the remote data repository. The methods and modules (or data) may also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The results of the disclosed processes or process steps may be stored, persistently or otherwise, in any type of non-transitory, tangible computer storage or may be communicated via a computer-readable transmission medium. Any processes, blocks, states, steps, or functionalities in flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing code modules, segments, or portions of code which include one or more executable instructions for implementing specific functions (e.g., logical or arithmetical) or steps in the process. The various processes, blocks, states, steps, or functionalities may be combined, rearranged, added to, deleted from, modified, or otherwise changed from the illustrative examples provided herein. In some embodiments, additional or different computing systems or code modules may perform some or all of the functionalities described herein.
The methods and processes described herein are also not limited to any particular sequence, and the blocks, steps, or states relating thereto may be performed in other sequences that are appropriate, for example, in serial, in parallel, or in some other manner. Tasks or events may be added to or removed from the disclosed example embodiments. Moreover, the separation of various system components in the embodiments described herein is for illustrative purposes and should not be understood as requiring such separation in all embodiments. It should be understood that the described program components, methods, and systems may generally be integrated together in a single computer product or packaged into multiple computer products. In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Indeed, it will be appreciated that the systems and methods of the disclosure each have several innovative aspects, no single one of which is solely responsible or required for the desirable attributes disclosed herein. The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. Certain features that are described in this specification in the context of separate embodiments also may be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment also may be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. No single feature or group of features is necessary or indispensable to each and every embodiment. It will be appreciated that conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. 
In addition, the articles "a," "an," and "the" as used in this application and the appended claims are to be construed to mean "one or more" or "at least one" unless specified otherwise. Similarly, while operations may be depicted in the drawings in a particular order, it is to be recognized that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Further, the drawings may schematically depict one or more example processes in the form of a flowchart. However, other operations that are not depicted may be incorporated in the example methods and processes that are schematically illustrated. For example, one or more additional operations may be performed before, after, simultaneously, or between any of the illustrated operations. Additionally, the operations may be rearranged or reordered in other embodiments. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. Accordingly, the embodiments are not intended to be limited to the embodiments shown herein, but are to be accorded the widest scope consistent with this disclosure, the principles and the novel features disclosed herein.
132,435
11860270
DETAILED DESCRIPTION

System Model

Frequency-modulated continuous-wave (FMCW) waveforms are popular in automotive radar as they enable high resolution target range and velocity estimation while requiring low cost samplers at the receive antennas. An FMCW waveform is a chirp signal and is transmitted periodically with a certain repetition interval. The target echo is mixed with the transmitted chirp, which results in a complex sinusoid, known as the beat signal. The frequency of the beat signal is the sum of the range frequency and the Doppler frequency, each containing information about the target range and Doppler. Estimation of the beat frequency is implemented in the digital domain with two fast Fourier transforms (FFTs), i.e., a range FFT (taken on samples obtained within the waveform repetition interval) followed by a Doppler FFT (taken on samples across repetition intervals), after sampling the beat signal with a low-speed analog-to-digital converter (ADC) (hence its low cost). The targets are first separated in the range and Doppler domains. As a result, the number of targets in the same range-Doppler bin is typically small, which enables angle finding with sparse sensing techniques, such as compressive sensing. A typical automotive radar transceiver, such as the AWR1243 of Texas Instruments, has $M_t=3$ transmit and $M_r=4$ receive antennas. Depending on performance requirements and cost, automotive radar can use one or multiple transceivers to synthesize a sparse linear array (SLA) for angle finding. FIG. 1 is an illustration of a graph 105 of the real array configuration of an automotive radar which is a cascade of 2 transceivers, where all transmit and receive antennas are clock synchronized. Let $\lambda$ denote the wavelength of the carrier frequency. In this example, $M_t=6$ transmit antennas are deployed with uniform spacing of $10\lambda$, while $M_r=8$ receive antennas are randomly deployed on discretized grid points in an interval of length equal to $10\lambda$. The interval is discretized uniformly with spacing of half a wavelength. The transmit antennas transmit FMCW waveforms in a way that at each receive antenna the contribution of each transmit antenna can be separated. The latter can be achieved using time domain multiplexing (TDM) or Doppler domain multiplexing (DDM), which effectively introduces waveform orthogonality among the transmitted waveforms. Therefore, with MIMO radar technology, a virtual SLA with 48 array elements and an aperture of $57\lambda$ has been synthesized, as shown in the graph 110 of FIG. 1. Compared to a uniform linear array (ULA) with half wavelength spacing and the same aperture, some elements at certain locations of the above virtual SLA are "missing" (denoted by zero values in the graph 110 of FIG. 1). However, the SLA approach uses a reduced number of transmit and receive antennas, which saves hardware cost. In addition, the SLA helps in reducing the mutual coupling between antenna elements, and thus improves the array calibration performance. The array response at a particular time instance consisting of data obtained at all the $M_tM_r$ virtual receivers and corresponding to the same range-Doppler bin is defined as the array snapshot. The SNR in the array snapshot is much higher than that in the beat signal, since energy has been accumulated in both the range and Doppler domains via the two FFTs. For example, a range FFT of length $N_R$ combined with a Doppler FFT of length $N_D$ can provide a total of $10\log_{10}(N_R N_D)$ dB SNR improvement. The high SNR in the array snapshot reduces the DOA estimation error.
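The beat-frequency processing chain described above can be sketched as follows. The carrier frequency, chirp parameters, and target parameters below are hypothetical, and the signal model keeps only the dominant range and Doppler phase terms; this is an illustrative sketch, not the patent's implementation.

import numpy as np

c = 3e8                 # speed of light (m/s)
fc = 77e9               # hypothetical carrier frequency (Hz)
B, T = 350e6, 28e-6     # chirp bandwidth and duration
N_R, N_D = 256, 512     # samples per chirp, chirps per frame
fs = N_R / T            # beat-signal sampling rate
slope = B / T

R, v = 35.0, 5.0        # hypothetical target range (m) and radial velocity (m/s)
f_range = 2 * slope * R / c          # range frequency of the beat signal
f_dopp = 2 * v * fc / c              # Doppler frequency

t = np.arange(N_R) / fs              # fast time within one chirp
n = np.arange(N_D)[:, None]          # slow time (chirp index)
beat = np.exp(1j * 2 * np.pi * (f_range * t + f_dopp * n * T))

# Range FFT over fast time, then Doppler FFT over slow time.
rd_map = np.fft.fftshift(np.fft.fft(np.fft.fft(beat, axis=1), axis=0), axes=0)
d_idx, r_idx = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
R_est = r_idx * fs / N_R * c / (2 * slope)             # range bin -> meters
v_est = (d_idx - N_D // 2) / (N_D * T) * c / (2 * fc)  # Doppler bin -> m/s
print(f"range ~ {R_est:.2f} m, velocity ~ {v_est:.2f} m/s")

With these values the peak lands within one bin of the true range and velocity, and the coherent gain of the two FFTs is the $10\log_{10}(N_R N_D)$ dB figure quoted above.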
A Novel Sparse Linear Array Approach

Suppose an array snapshot contains K targets with directions of arrival (DOAs) $\theta_k$, $k=1,\ldots,K$. Without noise, the SLA response can be written as:

$y_S = A_S s$   (1)

where $A_S=[a_S(\theta_1),\ldots,a_S(\theta_K)]$ is the steering matrix with

$a_S(\theta_k)=\left[1, e^{j\frac{2\pi}{\lambda}d_1\sin(\theta_k)},\ldots,e^{j\frac{2\pi}{\lambda}d_{M_tM_r-1}\sin(\theta_k)}\right]^T$

and $d_i$ is the spacing of the $i$-th element of the SLA to its reference element. Here, $s=[\beta_1,\ldots,\beta_K]^T$, where $\beta_k$ denotes the amplitude associated with the $k$-th target. The corresponding virtual ULA with $M=M_tM_r$ array elements and element spacing $d=\lambda/2$ has array response:

$y = A s$   (2)

where $A=[a(\theta_1),\ldots,a(\theta_K)]$ is the array steering matrix with:

$a(\theta_k)=\left[1, e^{j\frac{2\pi}{\lambda}d\sin(\theta_k)},\ldots,e^{j\frac{2\pi}{\lambda}(M-1)d\sin(\theta_k)}\right]^T$

When $M=2N-1$, $y\in\mathbb{C}^{2N-1}$ can be divided into N overlapped subarrays of length N. Based on those subarrays, a square Hankel matrix $Y\in\mathbb{C}^{N\times N}$ with $Y_{ij}=y_{i+j-1}$ for $i=1,\ldots,N$ and $j=1,\ldots,N$ may be formulated (the approach also works in the non-square case). The Hankel matrix Y has a Vandermonde factorization, shown below:

$Y = B\Sigma B^T$   (3)

where $B=[b(\theta_1),\ldots,b(\theta_K)]$ is the subarray steering matrix with

$b(\theta_k)=\left[1, e^{j\frac{2\pi}{\lambda}d\sin(\theta_k)},\ldots,e^{j\frac{2\pi}{\lambda}d(N-1)\sin(\theta_k)}\right]^T$

and $\Sigma=\mathrm{diag}(\beta_1,\ldots,\beta_K)$ is a diagonal matrix. Thus, the rank of the Hankel matrix Y is K if $N>K$. The Hankel matrix corresponding to an SLA configuration can be viewed as a subsampled version of Y. However, under certain conditions, the missing elements can be fully recovered by solving a relaxed nuclear norm optimization problem conditioned on the observed entries:

$\min \|X\|_*$ s.t. $X_{ij}=Y_{ij},\ (i,j)\in\Omega$   (4)

where $\Omega$ is the set of indices of observed entries that is determined by the SLA. Once the matrix Y is recovered, the full array response is obtained by averaging its anti-diagonal entries. DOAs can be estimated via standard array processing methods based on the array response corresponding to the completed Y. The conditions of matrix completion are related to bounds on the coherence of Y, and also the placement of the sampled entries.

Coherence Properties of the Hankel Matrix

Let U and V be the left and right singular subspaces of the singular value decomposition (SVD) of $Y\in\mathbb{C}^{N\times N}$, which has rank K. The coherence of U (similarly for V) equals:

$\mu(U)=\frac{N}{K}\max_{1\le i\le N}\|U(i,:)\|^2\in\left[1,\frac{N}{K}\right]$   (5)

The matrix Y has coherence with parameters $\mu_0$ and $\mu_1$ if (A0) $\max(\mu(U),\mu(V))\le\mu_0$ for some positive $\mu_0$, and (A1) the maximum element of the matrix $\sum_{1\le i\le K}u_iv_i^H$ is bounded by $\mu_1\sqrt{K}/N$ in absolute value for some positive $\mu_1$. It was shown that if the entries of matrix Y are observed uniformly at random, and there are constants C, c such that if $|\Omega|\ge C\max(\mu_1^2,\mu_0^{1/2}\mu_1,\mu_0 N^{1/4})\,\eta K N\log N$ for some $\eta>2$, the minimizer of problem (4) is unique and equal to Y with probability $1-cN^{-\eta}$. Therefore, if matrix Y has a low coherence parameter, it can be completed using a smaller number of observed entries. The following theorem relates the coherence of the Hankel matrix Y to the location of the targets relative to each other, the number of targets, and N.

Theorem 1 (Coherence of Hankel Matrix Y): Consider the Hankel matrix Y constructed from a uniform linear array as presented above and assume the set of target angles $\{\theta_k\}_{k=1}^{K}$ consists of almost surely distinct members, with minimal spatial frequency separation

$x=\min_{i\ne j}\frac{d}{\lambda}\left(\sin\theta_i-\sin\theta_j\right)$

satisfying $|x|\ge\xi\ne 0$. If $K\le N$, the matrix Y satisfies the conditions (A0) and (A1) with coherence

$\mu_0\triangleq\frac{N}{N-(K-1)\beta_N(\xi)}$ and $\mu_1\triangleq\mu_0\sqrt{K}$

with probability 1, where $\beta_N(\xi)=\frac{1}{N}\frac{\sin^2(\pi N\xi)}{\sin^2(\pi\xi)}$ is the Fejér kernel. The Fejér kernel $\beta_N(x)$ is a periodic function of x.
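To make these quantities concrete, a small numeric check of the Fejér kernel and the resulting coherence bound can be written as follows; the values of N, K, and $\xi$ are hypothetical and chosen only for illustration.

import numpy as np

def fejer(N, xi):
    """Fejer kernel beta_N(xi) = sin^2(pi*N*xi) / (N * sin^2(pi*xi))."""
    return (np.sin(np.pi * N * xi) ** 2) / (N * np.sin(np.pi * xi) ** 2)

N, K = 60, 2
for xi in (0.012, 0.03, 0.25):
    mu0 = N / (N - (K - 1) * fejer(N, xi))   # coherence bound from Theorem 1
    print(f"xi={xi}: beta_N={fejer(N, xi):.4f}, mu_0={mu0:.4f}")

The output shows $\mu_0$ approaching its minimum value of 1 as the angular separation $\xi$ grows, consistent with the discussion that better-separated targets yield a less coherent (easier to complete) Hankel matrix.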
For $d=\lambda/2$, the spatial frequency separation satisfies $|x|\in(0,1/2]$. For a fixed $\xi\in(0,1/2]$, it holds that $\beta_N(\xi)\le\frac{1}{N\sin^2(\pi\xi)}=O(1/N)$. Increasing the number of sub-array elements N will decrease the matrix coherence $\mu_0$. In the limit with respect to N, it holds that $\lim_{N\to\infty}\mu_0=1$, which is its smallest possible value.

Identifiability of the Full Array Via Matrix Completion

In this section, SLA topologies that can guarantee unique completion of the low-rank Hankel matrix Y are discussed. Consider the two SLA configurations (i.e., the graphs 205 and 210) shown in FIG. 2A. Both SLAs have the same number of array elements and the same aperture size of $3\lambda$. The second SLA (i.e., the graph 210) is a ULA with element spacing of $d=\lambda$. Assume that there is one target at angle $\theta$. Let $\gamma\triangleq e^{j\frac{2\pi d}{\lambda}\sin(\theta)}$. The normalized array snapshot of a ULA with aperture size of $3\lambda$ is $y=[1,\gamma,\gamma^2,\gamma^3,\gamma^4,\gamma^5,\gamma^6]^T$. The array snapshots of the two SLAs are $y_1=[1,\gamma,*,\gamma^3,*,*,\gamma^6]^T$ and $y_2=[1,*,\gamma^2,*,\gamma^4,*,\gamma^6]^T$, where * denotes the missing elements. Under the above two different SLAs, the Hankel matrices with missing elements are:

$Y_1=\begin{bmatrix}1&\gamma&*&\gamma^3\\ \gamma&*&\gamma^3&*\\ *&\gamma^3&*&*\\ \gamma^3&*&*&\gamma^6\end{bmatrix},\quad Y_2=\begin{bmatrix}1&*&\gamma^2&*\\ *&\gamma^2&*&\gamma^4\\ \gamma^2&*&\gamma^4&*\\ *&\gamma^4&*&\gamma^6\end{bmatrix}$

Matrix Y is rank one and it can be reconstructed from $Y_1$ uniquely. However, there would be infinitely many completions of Y from $Y_2$. In a ULA with element spacing $d=\lambda$, there is an angle ambiguity which cannot be mitigated via the matrix completion approach. Let $G=(V,E)$ be a bipartite graph associated with the sampling operator $P_\Omega$, where $V=\{1,2,\ldots,N\}\cup\{1,2,\ldots,N\}$ and $(i,j)\in E$ iff $(i,j)\in\Omega$. Let $G\in\{0,1\}^{N\times N}$ be the biadjacency matrix of the bipartite graph G with $G_{ij}=1$ iff $(i,j)\in\Omega$. Note that $P_\Omega(Y)=Y\odot G$, where $\odot$ denotes the Hadamard product. The two bipartite graphs $G_1$ and $G_2$ associated with the two SLAs are shown in FIG. 2, respectively. It can be seen that $G_1$ is connected, while $G_2$ is not. For a unique reconstruction of Y, the graph must be connected. The recoverability of a low-rank matrix can also be characterized by the spectral gap of the graph G, which is defined as the difference between its two largest singular values. If the spectral gap of matrix G is sufficiently large, the nuclear norm minimization method defined in (4) exactly recovers the low-rank matrix satisfying the conditions (A0) and (A1). It can be verified that $G_2$ is a 2-regular graph with $\sigma_1(G_2)=\sigma_2(G_2)=2$. Thus, the spectral gap of $G_2$ is zero and Y cannot be recovered from $Y_2$. Let $G_{K+1,K+1}-1$ denote the complete bipartite graph with $(K+1)\times(K+1)$ vertices minus one edge. The graph G is called a K-closed bipartite graph if G does not contain a vertex set whose induced subgraph is isomorphic to $G_{K+1,K+1}-1$. In general, a rank-K matrix can be uniquely completed only if the bipartite graph G associated with the sampling is K-closed. If $\Omega$ is generated from a d-regular graph G with sufficiently large spectral gap, and $d\ge 36C^2\mu_0^2K^2$, then the nuclear norm optimization of (4) exactly recovers the low-rank matrix, where C is a constant. It can be seen that if the coherence of Y, i.e., $\mu_0$ defined in Theorem 1, is low, the required number of observed samples or array elements of the SLA is smaller.

Numerical Results

Consider an automotive radar setup of FIG. 1 with FMCW transmit waveforms of bandwidth B=350 MHz, corresponding to a range resolution of $\Delta R=0.43$ meters. For one coherent processing interval, a total of 512 FMCW chirps are transmitted, with a chirp duration of T=28 μs. Consider two stationary targets at a range of 35 meters with DOAs of $\theta_1=10°$ and $\theta_2=20°$. The SNR of the beat signal is set to 0 dB.
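The nuclear norm problem (4) can be approximately solved with a singular value thresholding (SVT) iteration, which is the solver used in the processing described next. The following is a minimal sketch under stated assumptions: the threshold and step size follow a common SVT heuristic rather than the patent's settings, and a random sampling mask stands in for the SLA-derived pattern.

import numpy as np

rng = np.random.default_rng(0)
N, K = 60, 2                                   # subarray length, number of targets
M = 2 * N - 1                                  # full ULA length
theta = np.deg2rad([10.0, 20.0])
y = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta))).sum(axis=1)

i, j = np.indices((N, N))
Y = y[i + j]                                   # rank-K Hankel matrix, Y_ij = y_(i+j)
mask = rng.random((N, N)) < 0.45               # observed entries Omega (random here)

tau = 5.0 * N                                  # shrinkage threshold (heuristic)
step = 1.2 / 0.45                              # ~1.2*N^2/|Omega|, a common SVT step
Z = np.zeros_like(Y)
for _ in range(300):
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    X = (U * np.maximum(s - tau, 0)) @ Vh      # singular value shrinkage
    Z = Z + step * mask * (Y - X)              # gradient step on observed entries

# Recover the full array response by averaging the anti-diagonals of X.
y_hat = np.array([np.mean(X[::-1, :].diagonal(k)) for k in range(-(N - 1), N)])
print("max completion error:", np.max(np.abs(y_hat - y)))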
To estimate the range and Doppler of the targets, a range FFT of length 256 and a Doppler FFT of length 512 are implemented on the sampled beat signal for all 48 channels. The two FFT operations (range FFT followed by Doppler FFT) not only help separate targets in the range-Doppler domains, but also provide an SNR improvement in the array response of around 51 dB corresponding to the same range-Doppler bin. The SLA shown in FIG. 1 acts as a deterministic sampler of a rank-2 Hankel matrix $Y\in\mathbb{C}^{N\times N}$ with N=60, which is constructed based on the array response of a ULA with 119 elements. The array response of the SLA is normalized by its first element. Based on the observed SLA response, the Hankel matrix Y is completed via the singular value thresholding (SVT) algorithm. Let $\hat{Y}$ denote the completed Hankel matrix. The full ULA response can be reconstructed by taking the average of the anti-diagonal elements of the matrix $\hat{Y}$. The completed full array has an aperture size of $59\lambda$. Intuitively, in this simulation setting, matrix completion contributes around $10\log_{10}(119/48)=3.94$ dB SNR improvement for array processing. In FIG. 3, the range angle spectrum for the two stationary targets is plotted as the graph 305 and the graph 310. The two azimuth angle spectra are obtained by applying an FFT to the original SLA with the holes filled with zeros, and to the full array completed via matrix completion, respectively. It can be found that it is difficult to detect the two targets in the azimuth directions under the original SLA due to its high grating lobes. On the contrary, there are two clear peaks corresponding to the correct range and azimuth locations in the range angle spectrum of the completed full array. The comparison of the SLA and the completed full array via FFT and MUSIC is shown in FIG. 4. With spatial smoothing, the completed full array is divided into overlapped subarrays of length N=60 and a covariance matrix $R\in\mathbb{C}^{N\times N}$ is formulated. The MUSIC algorithm is then applied to R. It can be found that the FFT of the SLA generates two peaks corresponding to the correct azimuth directions at a cost of high grating lobes, which are suppressed under the completed full array. The MUSIC pseudo spectrum based on the completed full array response yields sharp peaks corresponding to the correct azimuth directions.

EXAMPLE EMBODIMENTS

FIG. 5 is an illustration of an example environment 500 for using measurements generated from a sparse linear array. As shown, the environment 500 includes a vehicle 501 equipped with a MIMO radar 505. The MIMO radar 505 may have some number of receive and transmit antennas. The MIMO radar 505 may be used by the vehicle 501 to perform one or more functions such as navigation or collision avoidance. The MIMO radar 505 may use the transmit antennas to generate a frequency modulated continuous wave ("FMCW") waveform 501. The waveform 501 may be directed away from the vehicle 501 and may be used to determine the distance and angle (e.g., target angle 590) of a target 540 with respect to the vehicle 501. The distance and target angle 590 may be used by the vehicle 501 to either avoid the target 540 or to alert a driver of the vehicle 501 to the presence of the target 540. To improve the performance of the MIMO radar 505, in some embodiments, the vehicle 501 may include a radar engine 550. The radar engine 550 may be implemented using a general purpose computing device such as the computing device 700 illustrated with respect to FIG. 7. The radar engine 550 may initially receive what is referred to herein as a sparse array 560.
EXAMPLE EMBODIMENTS
FIG. 5 is an illustration of an example environment 500 for using measurements generated from a sparse linear array. As shown, the environment 500 includes a vehicle 501 equipped with a MIMO radar 505. The MIMO radar 505 may have some number of receive and transmit antennas. The MIMO radar 505 may be used by the vehicle 501 to perform one or more functions such as navigation or collision avoidance. The MIMO radar 505 may use the transmit antennas to generate a frequency modulated continuous wave ("FMCW") waveform 501. The waveform 501 may be directed away from the vehicle 501 and may be used to determine the distance and angle (e.g., target angle 590) of a target 540 with respect to the vehicle 501. The distance and target angle 590 may be used by the vehicle 501 to either avoid the target 540 or to alert a driver of the vehicle 501 to the presence of the target 540. To improve the performance of the MIMO radar 505, in some embodiments, the vehicle 501 may include a radar engine 550. The radar engine 550 may be implemented using a general purpose computing device such as the computing device 700 illustrated with respect to FIG. 7. The radar engine 550 may initially receive what is referred to herein as a sparse array 560. The sparse array 560 may include a value for each of the receive antennas of the MIMO radar 505. In some embodiments, each value may be a signal strength for a portion of the FMCW waveform 501 that was received by the corresponding antenna of the MIMO radar 505. As described above, the values of the sparse array 560 may be used by the radar engine 550 to generate a virtual array 570. The virtual array 570 may have more values than the sparse array 560 and may include values from the sparse array 560 that correspond to the actual or real antennas of the MIMO radar 505, as well as values for virtual antennas that are not physically part of the MIMO radar 505. The values of the virtual array 570 that correspond to the virtual antennas are referred to as the missing elements 580. As may be appreciated, by using virtual antennas in addition to the real or actual antennas, the performance of the MIMO radar 505 may be increased without realizing the cost and size increase associated with increasing the number of antennas of the MIMO radar 505. In some embodiments, the radar engine 550 may calculate the values using matrix completion on the sparse array 560. More specifically, the radar engine 550 may calculate the missing values by completing a Hankel matrix as described previously. A Hankel matrix is a matrix having constant values along its anti-diagonals. After completing the missing elements 580 of the virtual array 570, the radar engine 550 may use the virtual array 570 to provide navigation services to the vehicle 501. For example, the radar engine 550 may use the values in the virtual array 570 to calculate a target angle 590 between the vehicle 501 and the target 540. The vehicle 501 may then use the target angle 590 to avoid hitting the target 540 (e.g., steer around the target 540 and/or apply the brakes) or may alert a driver of the vehicle 501 about the target 540. FIG. 6 is an illustration of an example method 600 for completing measurements for a uniform linear array from measurements from a sparse linear array. The method 600 may be performed by the radar engine 550 of FIG. 5. At 601, a first set of measurements for a sparse linear array is received. The measurements in the first set of measurements may be radar measurements received from a MIMO radar. Each measurement may correspond to an antenna of the sparse linear array. The sparse linear array may have fewer antennas than a corresponding virtual uniform linear array. At 603, a second set of measurements is generated. The second set of measurements may be generated from the first set of measurements by the computing device 700. The second set of measurements may be for the virtual uniform linear array. Because the virtual uniform linear array has more antennas than the sparse linear array, the second set of measurements may have a plurality of missing elements. At 605, matrix completion is used to determine values for the plurality of missing elements. The matrix completion may be performed by the computing device 700. Depending on the embodiment, the matrix completion may include completing a Hankel matrix using the first set of measurements. Other methods for matrix completion may be used. At 607, the second set of measurements is used to estimate a target angle. The target angle may be estimated by the computing device 700 using the second set of measurements. The second set of measurements may have been completed using matrix completion.
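Putting steps 601-607 together, a hypothetical end-to-end sketch might look as follows. It reuses the svt_complete and antidiagonal_average helpers sketched earlier; the function name, the zero-filled embedding, and the FFT-peak angle readout are our illustrative choices rather than details of the disclosure.

```python
import numpy as np

def estimate_target_angle(sparse_vals, positions, n_virtual, d_over_lambda=0.5):
    """Sketch of method 600: embed sparse-array values into a virtual ULA
    (601/603), complete the missing elements (605), estimate the angle (607).
    n_virtual is assumed odd (2N - 1 virtual elements)."""
    y = np.zeros(n_virtual, dtype=complex)
    observed = np.zeros(n_virtual, dtype=bool)
    y[positions], observed[positions] = sparse_vals, True

    n = (n_virtual + 1) // 2                          # Hankel matrix size
    Y_obs = np.array([[y[i + j] for j in range(n)] for i in range(n)])
    mask = np.array([[observed[i + j] for j in range(n)] for i in range(n)])
    y_full = antidiagonal_average(svt_complete(Y_obs, mask))

    spectrum = np.abs(np.fft.fft(y_full, 4096))
    u = np.fft.fftfreq(4096)                          # = (d/lambda) * sin(theta)
    return np.degrees(np.arcsin(u[np.argmax(spectrum)] / d_over_lambda))
```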
FIG. 7 shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing device environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality. Numerous other general purpose or special purpose computing device environments or configurations may be used. Examples of well-known computing devices, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like. Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices. With reference to FIG. 7, an exemplary system for implementing aspects described herein includes a computing device, such as computing device 700. In its most basic configuration, computing device 700 typically includes at least one processing unit 702 and memory 704. Depending on the exact configuration and type of computing device, memory 704 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 7 by dashed line 706. Computing device 700 may have additional features/functionality. For example, computing device 700 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 7 by removable storage 708 and non-removable storage 710. Computing device 700 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the device 700 and includes both volatile and non-volatile media, and removable and non-removable media. Computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 704, removable storage 708, and non-removable storage 710 are all examples of computer storage media. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 700. Any such computer storage media may be part of computing device 700. Computing device 700 may contain communication connection(s) 712 that allow the device to communicate with other devices.
Computing device 700 may also have input device(s) 714 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 716 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here. It should be understood that the various techniques described herein may be implemented in connection with hardware components or software components or, where appropriate, with a combination of both. Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium where, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
22,677
11860271
Reference signs: 1, dual-polarization micro-strip radiation unit; 2, spherical support; 3, antenna seat; 4, digital transceiver module; 5, signal processing module; 6, power supply module; 7, dual-channel digital T/R assembly; 8, first digital transmission network; 9, second digital transmission network; 10, multi-channel digital beam forming unit; 11, signal processing unit; 12, system control unit; 13, communication unit; 14, effective aperture; 15, beam.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The following clearly and completely describes the technical scheme in the embodiments of the present disclosure with reference to the attached figures. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure. The present disclosure aims to provide a spherical dual-polarization phased array weather radar to solve the problem of low accuracy of radars for detecting meteorological targets in the prior art. To make the foregoing objective, features and advantages of the present disclosure clearer and more comprehensible, the present disclosure is further described in detail below with reference to the attached figures and specific embodiments. In order to solve the problems in the prior art, the present disclosure provides a spherical dual-polarization phased array weather radar that realizes two-dimensional electric scanning of beams over the full airspace without variation of the scanning performance; namely, the beam width, the antenna gain, the dual-polarization channel performance and the radar irradiation volume remain stable and unchanged during beam scanning of the spherical dual-polarization phased array radar. The mutual coupling relation between all the dual-polarization micro-strip radiation units 1 and the transceiver channels is stable and unchanged, and the accuracy of weather detection and target identification is effectively improved. FIG. 1 is a system principle diagram of the spherical dual-polarization phased array weather radar of the present disclosure. As shown in FIG. 1, the spherical dual-polarization phased array weather radar mainly comprises an antenna seat 3, a spherical crown phased array antenna module, a digital transceiver module 4, a signal processing module 5 and a power supply module 6. The array surface of the spherical dual-polarization phased array weather radar adopts a spherical conformal phased array design, and all the dual-polarization micro-strip radiation units 1 on the array surface are arranged at equal included angles with respect to the sphere center. The spherical crown phased array antenna module comprises a spherical support frame 2 and a plurality of dual-polarization micro-strip radiation units 1; the dual-polarization micro-strip radiation units 1 are tightly arranged on the spherical support frame 2; the positions of the dual-polarization micro-strip radiation units 1 are adjusted so that the directions of the beams 15 emitted by the dual-polarization micro-strip radiation units 1 are consistent with the normal directions of the effective apertures 14 of the spherical support frame 2; and the spherical crown phased array antenna module is used for detecting weather.
Wireless transmission is carried out between the digital transceiver module 4 and the spherical crown phased array antenna module. The digital transceiver module 4 is used for generating a frequency modulation signal or a phase coding signal required for detecting meteorological targets and for receiving an echo signal reflected by the target. The digital transceiver module 4 comprises a plurality of dual-channel digital T/R assemblies 7, and each dual-polarization micro-strip radiation unit 1 corresponds to one of the dual-channel digital T/R assemblies 7. The dual-channel digital T/R assemblies 7 are used for receiving and transmitting signals, and the signals comprise the frequency modulation signal, the phase coding signal or the echo signal. Each dual-channel digital T/R assembly 7 comprises a digital receiving unit and a digital transmitting unit; the digital transmitting unit is used for generating the frequency modulation signal or the phase coding signal required for detecting the meteorological target; and the digital receiving unit is used for receiving the echo signal reflected by the target. The digital transmitting unit is an all-solid-state transmitter, and the all-solid-state transmitter is controlled by the system control unit 12 (namely, a radar control module) in the signal processing module to be in a transmitting state or a transmission-stopped state. The spherical dual-polarization phased array weather radar also comprises digital transmission networks; the digital transmission networks comprise a first digital transmission network 8 and a second digital transmission network 9; and the digital transmission networks are used for information interaction between the digital transceiver module 4 and the signal processing module 5. The first digital transmission network 8 is arranged in the digital transceiver module 4, and one end of the first digital transmission network 8 is connected with the dual-channel digital T/R assemblies 7; the second digital transmission network 9 is arranged in the signal processing module 5, and one end of the second digital transmission network 9 is connected with the other end of the first digital transmission network 8. The signal processing module 5 is connected with the digital transceiver module 4 through the digital transmission networks, receives downlink data of the dual-channel digital T/R assemblies 7 and carries out subsequent processing; the signal processing module 5 is used for performing spectral analysis according to the echo signal to obtain target echo information; and the system control unit 12 in the signal processing module 5 is also used for generating a control instruction to drive the spherical crown phased array antenna module to carry out signal acquisition in the horizontal direction and/or signal acquisition in the pitching direction. The spherical crown phased array antenna module also comprises a plurality of electronic switches; each electronic switch corresponds to one of the dual-polarization micro-strip radiation units 1; the signal processing module 5 is connected with the electronic switches in the spherical crown phased array antenna module; the system control unit 12 in the signal processing module 5 is used for controlling the electronic switches to be switched on and off according to the control instruction, and the spherical crown phased array antenna module forms electronic scanning at different angles in all directions according to the switched-on area of the electronic switches.
The signal processing module 5 comprises a multi-channel digital beam forming unit 10, a signal processing unit 11, a system control unit 12 and a communication unit 13; and the multi-channel digital beam forming unit 10, the signal processing unit 11, the system control unit 12 and the communication unit 13 are all connected with the other end of the second digital transmission network 9. The multi-channel digital beam forming unit 10 is connected with the signal processing unit 11; the signal processing unit 11 is connected with the communication unit 13; the multi-channel digital beam forming unit 10 is used for converting the echo signal received by the spherical crown phased array antenna module into a beam signal; the signal processing unit 11 is used for performing spectral analysis on the beam signal to obtain target echo information; and the communication unit 13 is used for sending the target echo information. Moreover, the spherical dual-polarization phased array weather radar also comprises a power supply module 6; the power supply module 6 is connected with the spherical crown phased array antenna module, the digital transceiver module 4 and the signal processing module 5; and the power supply module 6 is used for supplying power to the spherical dual-polarization phased array weather radar. FIG. 3 is a beam schematic diagram of the spherical dual-polarization phased array weather radar of the present disclosure, and FIG. 4 is an effective aperture scanning schematic diagram of the spherical dual-polarization phased array weather radar of the present disclosure. As shown in FIG. 3 and FIG. 4, in the electric scanning process of the spherical dual-polarization phased array weather radar, the main beams always point in the normal directions of the cambered surfaces of the effective apertures 14. The directions of the beams 15 emitted by the dual-polarization micro-strip radiation units 1 are adjusted to be consistent with the normal directions of the effective apertures 14 of the spherical support frame 2, so that the beams 15 emitted by all the dual-polarization micro-strip radiation units 1 always point in the normal directions of the effective apertures 14 of the spherical support frame 2, and azimuth-pitching two-dimensional electric beam scanning is realized; the beam widths and the gain therefore do not change. The selection of the working units is realized through a control terminal. By sending a radar control message instruction, the control terminal calculates a vertical chord plane and the effective apertures 14 according to the beam pointing (azimuth and pitching) angles, and selects the transceiver active channels in the effective apertures 14 for normal operation, while the channels in the non-effective apertures are all closed, so as to form scanning beams with the corresponding pointing. As shown in FIG. 4, R1 to R10 represent a beam 1 to a beam 10 received by the dual-polarization micro-strip radiation units 1, and EZ represents a pitching angle. The spherical dual-polarization phased array weather radar first receives an external instruction or sets working parameters according to built-in parameters, the signal processing module 5 generates a timing signal required for operation of the whole machine, and the system enters its working modes.
Under the action of the modulating pulse, the frequency modulation signal or the phase coding signal required for detecting the meteorological target is generated, an excitation signal is sent to the digital transceiver module 4 after the frequency is up-converted to the system working band, and the amplified power is radiated by the dual-polarization micro-strip radiation units 1. After encountering the meteorological target, the electromagnetic waves are backscattered, the dual-polarization micro-strip radiation units 1 receive the echo signal, the echo signal enters the receiving link of the digital transceiver module 4, and the echo signal is transmitted into the multi-channel digital beam forming unit 10 after low-noise amplification, filtering, down-conversion to a digital intermediate frequency, and analog-to-digital converter sampling. A beam receiving signal is formed in the multi-channel digital beam forming unit 10, and spectral analysis is then carried out by the signal processing unit 11 to obtain the target echo information. The echo information is transmitted to the control terminal through the communication unit 13. The detection distance of the spherical dual-polarization phased array weather radar is larger than or equal to 400 km. The generation, development, dissipation and movement states of a strong convection dangerous weather system within a range of 400 km around the radar station can be detected, effective monitoring and early warning can be carried out on disastrous weather such as meso-scale storms, rainstorms, wind shear, hail and tornadoes within a range of 200 km, and timely and accurate meteorological detection data is provided for the meteorological support of a user. In one specific embodiment, FIG. 5 is an antenna scanning lobe pattern of the spherical dual-polarization phased array weather radar of the present disclosure. As shown in FIG. 5, the beam width is designed to be 1°, and the radar is selected to operate in the C-band (5400 MHz to 5600 MHz), so that the sphere diameter D is equal to 4.5 m, and the received digital beams are weighted so that the maximum side lobe level (SLL) is 30 dB or more below the main lobe. When the number of units on the spherical equator Nmax is equal to 380, grating lobes do not appear from 5400 MHz to 5600 MHz. In order to guarantee coverage of the airspace with the azimuth ranging from 0 to 360° and the pitching ranging from −2° to +182°, a total of n (41938) dual-polarization micro-strip radiation units 1 are arranged in the whole array. The spherical dual-polarization phased array weather radar adopts a pulse compression system in its design. Considering both the distance resolution and the action distance, the signal waveform supports a linear frequency modulation pulse mode, a nonlinear frequency modulation pulse mode and a phase coding pulse mode, the maximum signal bandwidth is 15 MHz, and the distance resolution is better than 10 m. The pulse width is adjustable in a range from 0.5 microseconds to 200 microseconds. After pulse compression, the distance side lobes are reduced through frequency domain weighting processing in signal processing. During dual-polarization operation, the transmitted waveforms of horizontal polarization and vertical polarization can be the same, or an orthogonal coding form can be adopted, so that the isolation between the two channels during dual-polarization operation is improved.
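The coherent combination performed by the multi-channel digital beam forming unit 10 can be illustrated with a short sketch. This is a generic narrowband digital beam former of our own (phase-align and sum; the function and argument names are illustrative), not the disclosure's implementation; a low-side-lobe amplitude taper such as the weighting mentioned above would simply multiply the steering weights.

```python
import numpy as np

def form_beam(channel_iq, element_xyz, steer_unit_vec, wavelength):
    """Narrowband digital beam forming sketch.
    channel_iq:     (num_channels, num_samples) complex baseband echoes
    element_xyz:    (num_channels, 3) element positions in meters
    steer_unit_vec: unit vector of the beam pointing direction (the normal
                    of the effective aperture for this spherical array)"""
    k = 2.0 * np.pi / wavelength
    phase = k * element_xyz @ steer_unit_vec       # per-channel path-length phase
    weights = np.exp(-1j * phase)                  # conjugate (steering) weights
    return weights @ channel_iq                    # coherent sum -> beam signal
```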
The working modes of the spherical dual-polarization phased array weather radar are divided into a Doppler working mode and a polarization working mode. In the Doppler working mode, the system carries out speed, spectral width and intensity processing on the echoes and estimates relevant information of the weather target from the echoes. The polarization working mode is mainly used for processing parameters such as ZDR, ΦDP, LDR and ρHV, in addition to the conventional Doppler Z/V/W processing. In order to guarantee the maximum detection distance of the system, a linear frequency modulation pulse waveform is adopted according to the working principle of weather radar and the requirements of system detection, and the requirements on system detection power are met by utilizing a pulse compression technology. FIG. 2 is a transmitted waveform schematic diagram of the spherical dual-polarization phased array weather radar of the present disclosure. As shown in FIG. 2, when the system works, the pulse width is 20 microseconds to 160 microseconds, the corresponding signal bandwidth is 2.5 MHz or 5 MHz, and the pulse is compressed to 0.3 microseconds or 0.6 microseconds by using a digital pulse compression technology in signal processing. For the close-range blind area caused by the transmitted wide pulse, a blind compensation pulse signal is transmitted to carry out close-range blind compensation; the width of the blind compensation narrow pulse is 0.5-5 microseconds, which can be set by software. The width of the blind compensation frequency modulation pulse is 10 microseconds or 20 microseconds, which can be set by software. The radar beams are designed in a wide-transmitting and narrow-receiving multi-beam form, an intra-pulse inter-frequency narrow-transmitting and narrow-receiving multi-beam form, a wide-band narrow-transmitting and narrow-receiving single-beam form, and the like. The wide-transmitting and narrow-receiving multi-beam form means that wide beams are transmitted and multiple beams are then received through a digital beam forming technology, so that three-dimensional scanning of the airspace is realized. The transmitted signals adopt a long/narrow double-pulse or long/medium/narrow three-pulse working waveform, wherein the long/medium pulse adopts a linear frequency modulation or phase coding signal form, the narrow pulse adopts a single carrier frequency signal form, and the medium pulse and the narrow pulse are used for short-range blind compensation; finally, the multi-channel digital beam forming unit 10 forms a receiving beam signal and sends the receiving beam signal to the signal processing unit 11 for processing. In the intra-pulse inter-frequency narrow-transmitting and narrow-receiving multi-beam form, the spherical crown phased array antenna module transmits multiple beams, echo signals separated by frequency isolation are received at different scanning angles, and the multi-beam signals are finally received through the multi-channel digital beam forming unit 10 and sent to the signal processing unit 11 for processing. A single carrier frequency signal mode is adopted for short-range blind compensation, and the short-range blind areas at different scanning angles are compensated through five narrow pulses. Single-beam working is adopted for the wide-band narrow-transmitting and narrow-receiving form; this working mode is designed to provide high distance resolution, the signal bandwidth is 15 MHz, and the distance resolution can reach 10 m.
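As a quick back-of-the-envelope check of these figures (our own arithmetic using the standard pulse-compression relations, not code from the disclosure; the pairing of pulse width with bandwidth below is our assumption for illustration): the ideal compressed width is about 1/B, which the frequency-domain weighting mentioned above broadens toward the stated 0.3/0.6 microsecond values, and the compression gain equals the time-bandwidth product.

```python
import numpy as np

c = 3e8  # speed of light in m/s

for b_mhz in (2.5, 5.0):
    b = b_mhz * 1e6
    # ideal (unweighted) compressed pulse width is ~1/B
    print(f"B = {b_mhz} MHz -> 1/B = {1e6 / b:.1f} us")   # 0.4 us and 0.2 us

for tau_us, b_mhz in [(20, 5.0), (160, 2.5)]:
    # pulse compression gain equals the time-bandwidth product
    gain_db = 10 * np.log10(tau_us * 1e-6 * b_mhz * 1e6)
    print(f"tau = {tau_us} us, B = {b_mhz} MHz -> gain ~ {gain_db:.0f} dB")

# wide-band single-beam mode: range resolution c/(2B) with B = 15 MHz
print("wide-band range resolution:", c / (2 * 15e6), "m")  # 10.0 m
```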
The transmitted signals likewise adopt a long/narrow double-pulse or long/medium/narrow three-pulse working waveform, wherein the long/medium pulse adopts a linear frequency modulation or phase coding signal form, the narrow pulse adopts a single carrier frequency signal form, and the medium pulse and the narrow pulse are used for short-range blind compensation; finally, the multi-beam signals are received through the multi-channel digital beam forming unit 10 and sent to the signal processing unit 11 for processing. All embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments; for the parts that are the same or similar between different embodiments, reference may be made between the embodiments. Several examples have been used to illustrate the principles and implementation methods of the present disclosure. The description of the embodiments is intended to help in understanding the method of the present disclosure and its core principles. In addition, those skilled in the art can make various modifications in terms of specific embodiments and scope of application in accordance with the teachings of the present disclosure. In conclusion, the content of this specification shall not be construed as a limitation to the present disclosure.
18,449
11860272
DETAILED DESCRIPTION
FIGS. 1 and 2 represent an ultrasonic transducer element 100. The ultrasonic transducer element 100 comprises a diaphragm 120 with an electrode 112, and a substrate 101 with an electrode 111. A cavity 130, which allows movement of the diaphragm 120, is provided between the diaphragm 120 and the substrate 101. By applying an AC voltage between the electrodes 111 and 112 using a voltage source 151, the diaphragm 120 can be excited into oscillation so that the ultrasonic transducer element 100 can emit ultrasound waves 141. The ultrasonic transducer element 100 shown in FIGS. 1 and 2 may likewise be used to detect ultrasound waves 142. For this purpose, a DC voltage may be applied between the electrodes 111 and 112 using the voltage source 152. The ultrasound waves 142 can excite the diaphragm 120 into oscillation. Because the distance between the electrodes 111 and 112 therefore varies, an AC voltage is induced, which can be measured using a measuring device 153. FIGS. 3 to 6 schematically represent the way in which touching of a covering 390, 490 (e.g., a cover or lid), on the side of the covering 390, 490 opposite from the ultrasonic touch sensor, can be registered using the ultrasonic transducer element 311 or 411, respectively. The ultrasonic transducer element 311 or 411 is respectively embedded in an encapsulation layer 320, 420, the encapsulation layer 320, 420 comprising a contact surface via which the ultrasonic touch sensor is applied on the covering 390, 490. The ultrasonic transducer element 311, 411 may respectively be fastened on a circuit board 370, 470 and electrically connected thereto. As shown in FIG. 3, ultrasound waves can be generated using the ultrasonic transducer element 311, these being transmitted substantially fully through the interface between the encapsulation layer 320 and the covering 390 and subsequently reflected at the free surface of the covering 390 on the opposite side from the encapsulation layer 320. After transmission back through the interface between the covering 390 and the encapsulation layer 320, the ultrasound waves can again be detected by the sensor element 311, so that an echo signal as represented below FIG. 3 is obtained. If the free surface of the covering 390 on the opposite side from the encapsulation layer 320 is touched, for example with a finger 401, only a small proportion of the ultrasound waves will be reflected at the free surface and the echo signal will decrease, as represented below FIG. 4. FIG. 5 shows that a cavity 491 remains when the ultrasonic touch sensor is applied on the covering 490. The effect of this cavity 491 is that the ultrasound waves emitted by the sensor element 411 do not pass through the interface between the encapsulation layer 420 and the covering 490, but are reflected at this interface, so that an echo signal as represented underneath is obtained. Since the ultrasound waves are not (or are almost not) transmitted into the covering, touching the covering 490 with the finger 601 does not lead to a change in the echo signal. Although a capacitive sensor element 311, 411 has been described above, corresponding considerations also apply for a piezoelectric sensor element, in particular for ultrasonic transceivers which operate according to a piezoelectric measurement principle.
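The amplitude drop that distinguishes FIG. 3 from FIG. 4 follows from the acoustic impedance mismatch at the free surface of the covering. The sketch below is our own illustration of that physics, using the textbook normal-incidence reflection coefficient and rough, assumed impedance values for glass, air and soft tissue; it is not taken from the disclosure.

```python
def pressure_reflection(z1, z2):
    """Normal-incidence pressure reflection coefficient at a z1 -> z2 interface."""
    return (z2 - z1) / (z2 + z1)

# rough textbook acoustic impedances in rayl (kg m^-2 s^-1); assumed values
Z_GLASS, Z_AIR, Z_TISSUE = 13.0e6, 4.0e2, 1.6e6

echo_untouched = abs(pressure_reflection(Z_GLASS, Z_AIR))     # ~1.00: strong echo
echo_touched = abs(pressure_reflection(Z_GLASS, Z_TISSUE))    # ~0.78: echo drops

def is_touched(echo_amp, reference_amp, threshold=0.9):
    """Simple detector in the spirit of FIGS. 3 and 4: flag a touch when the
    echo falls below a fraction of the untouched reference amplitude."""
    return echo_amp < threshold * reference_amp

print(is_touched(echo_touched, echo_untouched))   # True
```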
FIG. 7 illustrates an ultrasonic touch sensor 701 that comprises a housing 702 in which a first semiconductor chip 711 and a second semiconductor chip 712 are arranged. The first semiconductor chip 711 and the second semiconductor chip 712 in this case each comprise an ultrasonic transducer element and are embedded in an encapsulation layer 706. The ultrasonic touch sensor 701 is connected to a covering 703 using an adhesive layer 704. When ultrasound waves are emitted by an ultrasonic transducer element, reflections may occur at the housing 702, so that not only a variable echo signal 780 due to touching with the finger 705 but also, possibly, parasitic ultrasound signals 771, 772, 773, 774 are registered. These may interfere with the reliable registering of touching of the covering 703. FIG. 8 illustrates a step for the production of a touch sensor. A prefabricated housing 801 having a recess 802 and having electrical terminals 811, 812 is provided. As represented in FIG. 9, a first semiconductor chip 921 and a second semiconductor chip 922 may be arranged in the recess 802 of the prefabricated housing 801. Using bonding wires 931 and 932, electrical contacts of the first semiconductor chip 921 and of the second semiconductor chip 922 may be connected to the electrical terminals 811, 812. The first semiconductor chip 921 may comprise a first ultrasonic transducer element and the second semiconductor chip 922 may comprise a second ultrasonic transducer element. The first ultrasonic transducer element and the second ultrasonic transducer element may each be covered with a gel 1041, 1042 for acoustic coupling to a potting compound. The applied gel 1041, 1042 may be subjected to a physical and/or chemical treatment so that a cured gel 1141, 1142 is obtained, as represented in FIG. 11 by darker hatching. Subsequently, the first semiconductor chip 921 and the second semiconductor chip 922 may be embedded in a potting compound 1205 (cf. FIG. 12), which may then be subjected to a physical and/or chemical treatment, in particular cured, as represented in FIG. 13 by the darker hatching. The potting compound 1305 may in this case, in particular, protect the bonding wires 931, 932, and their fastening on the first semiconductor chip 921 and the second semiconductor chip 922, respectively, and on the electrical terminals 811, 812, from mechanical stress. Via the free surface of the potting compound 1305, the ultrasonic transducer element may later be applied onto a covering. As represented in FIG. 14, a recess 1406 is then introduced into the potting compound 1305 in order to produce an acoustic barrier between the first semiconductor chip and the second semiconductor chip. This may, for example, be carried out using laser ablation. FIG. 15 shows an ultrasonic touch sensor 1500 having a contact face for applying the ultrasonic touch sensor 1500 onto a covering 1592, having a first ultrasonic transducer element, having a first semiconductor chip 921, the first semiconductor chip 921 comprising the first ultrasonic transducer element, and having a second ultrasonic transducer element, wherein an acoustic barrier 1406 is formed between the first ultrasonic transducer element and the second ultrasonic transducer element. The second ultrasonic transducer element is in this case arranged laterally with respect to the first ultrasonic transducer element. In the ultrasonic touch sensor 1500 in FIG. 15, the acoustic barrier 1406 is formed as a cavity, in particular as an air gap. It is, however, also conceivable for the acoustic barrier to comprise an absorption material. In particular, polymers comprising tungsten may be used as an absorption material.
It is likewise conceivable to provide the recess 1406 with a sound-absorbing wall structure in order to produce the acoustic barrier. In order to fasten the ultrasonic touch sensor 1500, the covering 1592 may comprise an adhesive layer 1591. Using the acoustic barrier, it is possible to reduce the risk that parasitic ultrasound sources 1571, 1572, which are undesired but often difficult to avoid, and which occur during the emission of ultrasound waves 1581 by the first ultrasonic transducer element in the direction of the covering 1592, will reach the second ultrasonic transducer element. In particular, crosstalk may be avoided. The reliability of the detection of touching of the covering 1592 with a finger 1505 may therefore be increased. FIG. 16 represents a further ultrasonic touch sensor 1600. It corresponds substantially to the ultrasonic touch sensor 1500 represented in FIG. 15, so that for the description of the features provided with the reference numerals 1611, 1631, 1621, 1641, 1681, 1605, 1601, 1606, 1691, 1682, 1642, 1622, 1692, 1632, 1612, reference is made to the description of the corresponding features 811, 931, 921, 1141, 1581, 1305, 801, 1406, 1591, 1582, 1142, 922, 1592, 932, 812. In addition to the first semiconductor chip 1621 and the second semiconductor chip 1622, the ultrasonic touch sensor 1600 also comprises a third semiconductor chip 1623. The third semiconductor chip 1623 may, in particular, comprise an integrated circuit for generating the control signals for a transmitting ultrasonic transducer element and/or for evaluating the reception signal of a receiving ultrasonic transducer element. The use of a third semiconductor chip 1623 may make it possible to manufacture the third semiconductor chip 1623 with process techniques that differ from the process techniques needed for the production of the ultrasonic transducer elements. The third semiconductor chip 1623 may be partially or fully embedded in the prefabricated housing. The third semiconductor chip 1623 may also be arranged laterally with respect to, or even below, the first semiconductor chip 1621. A further ultrasonic touch sensor 1700 is depicted in FIG. 17. In contrast to the ultrasonic touch sensors 1500 and 1600, in the ultrasonic touch sensor 1700 a plurality of ultrasonic transducer elements are arranged in a single semiconductor chip 1721. A plurality of acoustic barriers 1761, 1762, 1763, 1764, 1765, 1766, 1767 are provided between the ultrasonic transducer elements. The acoustic barriers 1761, 1762, 1763, 1764, 1766, 1767 are formed as recesses which extend not only through the potting compound 1705 but also through the gel 1741 that covers the ultrasonic transducer elements. The plurality of ultrasonic transducer elements separated by the acoustic barriers 1761, 1762, 1763, 1764, 1766, 1767 can make it possible to determine not only touching, but also a position of the finger 1505. The ultrasonic touch sensor 1700 may consequently also be regarded as a position sensor. FIG. 18 illustrates steps for the production of an ultrasonic touch sensor. In step 1801, a first semiconductor chip is provided, the first semiconductor chip comprising a first ultrasonic transducer element. In step 1802, a second ultrasonic transducer element is provided. In step 1803, the first semiconductor chip is embedded in a potting compound. In step 1804, a recess is introduced into the potting compound in order to produce an acoustic barrier between the first ultrasonic transducer element and the second ultrasonic transducer element.
ASPECTS
Some aspect implementations will be defined by the following aspects:
Aspect 1. An ultrasonic touch sensor (1500), having a contact face for applying the ultrasonic touch sensor (1500) onto a covering (1592), having a first ultrasonic transducer element, having a first semiconductor chip (921), the first semiconductor chip (921) comprising the first ultrasonic transducer element, and having a second ultrasonic transducer element, wherein an acoustic barrier is formed between the first ultrasonic transducer element and the second ultrasonic transducer element.
Aspect 2. The ultrasonic touch sensor (1500) according to Aspect 1, wherein the second ultrasonic transducer element is arranged laterally with respect to the first ultrasonic transducer element.
Aspect 3. The ultrasonic touch sensor (1500) according to one of Aspects 1 or 2, wherein the acoustic barrier is formed as a cavity, in particular as an air gap.
Aspect 4. The ultrasonic touch sensor (1700) according to one of Aspects 1 to 3, having a second semiconductor chip (922), wherein the second semiconductor chip (922) comprises the second ultrasonic transducer element.
Aspect 5. The ultrasonic touch sensor (1500) according to one of Aspects 1 to 3, wherein the first semiconductor chip comprises the second ultrasonic transducer element.
Aspect 6. The ultrasonic touch sensor (1500) according to one of the preceding aspects, wherein the first ultrasonic transducer element and/or the second ultrasonic transducer element is covered with a gel for acoustic coupling to a potting compound.
Aspect 7. The ultrasonic touch sensor (1500) according to one of the preceding aspects, wherein the first semiconductor chip (921) and/or the second semiconductor chip (922) is embedded in a or the potting compound (1305).
Aspect 8. The ultrasonic touch sensor (1500) according to Aspect 7, wherein the barrier has an acoustic impedance which differs from the acoustic impedance of the potting compound.
Aspect 9. The ultrasonic touch sensor (1500) according to one of Aspects 5 to 8, wherein the first semiconductor chip comprises a multiplicity of ultrasonic transducer elements, which are separated from one another by acoustic barriers.
Aspect 10. The ultrasonic touch sensor (1500) according to one of the preceding aspects, wherein the ultrasonic touch sensor (1500) is a position sensor.
Aspect 11. A method for producing an ultrasonic touch sensor (1500), in particular an ultrasonic touch sensor (1500) according to one of Aspects 1 to 10, wherein a first semiconductor chip (921) is provided, wherein the first semiconductor chip (921) comprises a first ultrasonic transducer element, wherein a second ultrasonic transducer element is provided, wherein the first semiconductor chip (921) is embedded in a potting compound (1205), and wherein a recess (1406) is introduced into the potting compound (1305), in particular using laser ablation, in order to produce an acoustic barrier between the first ultrasonic transducer element and the second ultrasonic transducer element.
Aspect 12. The method for producing an ultrasonic touch sensor (1500) according to Aspect 11, wherein a prefabricated housing (801) is provided, and wherein the first semiconductor chip (921) is arranged in a recess (802) of the prefabricated housing (801).
Aspect 13.
The method for producing an ultrasonic touch sensor (1500) according to one of Aspects 11 or 12, wherein the first ultrasonic transducer element and/or the second ultrasonic transducer element is covered with a gel (1041) for acoustic coupling to the potting compound (1305).
Aspect 14. The method for producing an ultrasonic touch sensor (1500) according to Aspect 13, wherein the gel (1141) is cured.
Aspect 15. The method for producing an ultrasonic touch sensor (1500) according to one of Aspects 11 to 14, wherein the potting compound (1305) is cured.
Aspect 16. The method for producing an ultrasonic touch sensor (1500) according to one of Aspects 11 to 15, wherein the recess (1406) is filled with an absorption material in order to produce the acoustic barrier.
Although specific aspect implementations have been illustrated and described in this description, persons with ordinary skill in the art will realize that many alternative and/or equivalent implementations may be selected in place of the specific aspect implementations presented and described in this description, without departing from the scope of the implementation as presented. This application is intended to cover all adaptations or variations of the specific aspect implementations discussed herein. It is therefore intended that this implementation be limited only by the claims and the equivalents of the claims.
15,106
11860273
DETAILED DESCRIPTION
Acoustic imaging can be performed by emitting an acoustic waveform (e.g., a pulse) within a physical elastic medium, such as a biological medium, including tissue. The acoustic waveform is transmitted from a transducer element (e.g., of an array of transducer elements) toward a target volume of interest (VOI). In conventional real aperture ultrasound imaging systems, the quality of images directly depends on the acoustic field generated by the transducer of the ultrasound system, and the image is typically acquired sequentially, one axial image line at a time (i.e., the target area is scanned slice by slice). This sets limits on the frame rate during imaging that may be detrimental in a variety of real-time ultrasound imaging applications, e.g., including the imaging of moving targets. To address the limitations of conventional real aperture ultrasound imaging, synthetic aperture ultrasound imaging can be used to improve the quality of ultrasound images. A "synthetic aperture" is the concept in which one or more smaller, real apertures (sub-apertures) are used successively to examine a VOI, with their phase centers moved along a known one-dimensional (1D), two-dimensional (2D), and/or three-dimensional (3D) path of a particular or arbitrary shape, so as to realize a larger effective (non-real) aperture for acquiring an image. The synthetic aperture can be formed by mechanically altering the spatial position of the electro-acoustic transducer (e.g., transducer array) to the successive beam transmission and/or receiving locations, by electronically altering the phase center of the successive beam transmission and/or receiving locations on the electro-acoustic transducer array, or by a combination of the above. Synthetic aperture-based imaging was originally used in radar systems to image large areas on the ground from aircraft scanning the area of interest from above. Synthetic aperture focusing in ultrasound imaging is based on the geometric distance from the ultrasound transmitting elements to the VOI location and the distance from that location back to the ultrasound receiving element. In ultrasound imaging, the use of the synthetic aperture enables focusing on a point in the target region by analyzing the received amplitude and phase data of the returned echoes (e.g., mono-static and bi-static echoes), recorded at each of a plurality of transmitter and receiver positions from all directions, to provide information about the entire area. Since the direction of the returned echoes cannot be determined from one receiver channel alone, many receiver channels are used to determine the information contained in the returning echoes, which are processed across some or all of the channels to ultimately render information used to produce the image of the target region. In some implementations of full synthetic transmit aperture imaging, each transmitter within the full set of transmitters can be excited sequentially, separately, in succession, consecutively, and individually. Echoes are recorded on the entire set of receivers for each transmitter spatial location. Considering a set of M transmitters and N receivers, which may or may not share spatial locations, the resulting number of ultrasound echoes equals M×N. For example, for a 128-element ultrasound array, the total number of echoes equals 16384. The echoes are fed into a delay-and-sum beamformer, which is applied to beamform a set of points in space comprising the image, and the resulting image is considered a "gold standard" for spatial resolution.
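The delay-and-sum operation just described can be illustrated with a brief sketch. This is a generic, minimal single-point delay-and-sum beamformer of our own (nearest-sample delays, no apodization or interpolation), shown only to make the geometry concrete; the function and argument names are illustrative, not from the disclosure.

```python
import numpy as np

def das_focus(echoes, tx_pos, rx_pos, point, c=1540.0, fs=40e6):
    """Focus one image point by delay-and-sum over all M x N Tx/Rx pairs.
    echoes[i][j] is the sampled echo for transmit element i, receive element j."""
    value = 0.0
    for i, t in enumerate(tx_pos):
        d_tx = np.linalg.norm(point - t)              # transmit path length
        for j, r in enumerate(rx_pos):
            d_rx = np.linalg.norm(point - r)          # receive path length
            k = int(round((d_tx + d_rx) / c * fs))    # nearest round-trip sample
            if k < len(echoes[i][j]):
                value += echoes[i][j][k]
    return value
```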
The properties of the full synthetic transmit aperture relate to the use of all available spatial samples (e.g., provided by transducer elements) on both transmission and reception, combined with the virtual extension of the physical apertures due to the convolution of the transmit aperture with the receive aperture. In the case where the same aperture is used for both transmit and receive, the effective aperture is double the size of the physical aperture, thus decreasing the effective f-number and the spatial resolution by a factor of two. FIG. 1 shows an exemplary image 100 obtained through full synthetic transmit aperture. This example image is of a CIRS Model 044 ultrasound phantom and was obtained through full synthetic transmit aperture beamforming and has 55.2 dB of dynamic range; it was generated using a Philips/ATL L7-4 linear array operating at 5 MHz connected to a Verasonics ultrasound imaging system. The image depicts three 100 micrometer nylon wire targets (labeled 101, 103, 105), which are visible near 15 mm, 35 mm, and 65 mm depth, respectively. Likewise, the image shows four anechoic targets (labeled 107, 109, 111, and one not shown), which are visible near 20 mm, 40 mm, 60 mm and 80 mm depth, respectively. The spatial resolution worsens with increasing depth due to the linearly increasing f-number with depth, combined with defocusing of the elevation beam beyond approximately 30 mm. It is well known to those knowledgeable in the field of synthetic aperture imaging that the majority of the spatial samples (e.g., transducer elements) corresponding to a given image point for full synthetic transmit aperture imaging may be redundant and/or may contain largely similar information. In fact, this redundancy is often exploited when the synthetic transmit aperture includes a reduced set of subapertures of two or more contiguous elements in order to improve SNR and speed acquisition, albeit with sacrifices in spatial resolution. Additionally, reduced-redundancy spatial sampling schemes are well known and readily formulated using products of k-space representations of transmit and receive apertures and the corresponding transmit-receive aperture response through linear convolution in the spatial domain. An important redundancy in synthetic transmit aperture imaging is based on the principle of acoustic reciprocity, e.g., the echo resulting from transmission on element i and reception on element j is practically identical to the echo resulting from transmission on element j and reception on element i, by which approximately half of the transmitter and receiver combinations are assumed to be identical. For example, with knowledge of Tx,Rx combination (i,j), Tx,Rx combination (j,i) may be recovered, assumed, and/or replaced. Moreover, it is well known that only 2N−1 out of N² echo samples, i.e., from all possible transmitter and receiver combinations for a given image point, are needed to form a nearly equivalent image. For example, from all Tx,Rx combinations (i,j), the 2N−1 echo samples required for fully-spatially-sampled image formation include the combinations where i=j (corresponding to N echo samples) and the combinations where i=j+1 (corresponding to N−1 echo samples), for a total of 2N−1 echo samples.
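As a small sanity check of this counting argument (our own illustration, not from the disclosure):

```python
N = 128
full = [(i, j) for i in range(N) for j in range(N)]
# diagonal plus first off-diagonal; reciprocity supplies (j, i) from (i, j)
reduced = [(i, i) for i in range(N)] + [(i + 1, i) for i in range(N - 1)]
assert len(full) == N**2 == 16384
assert len(reduced) == 2 * N - 1 == 255
```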
However, known techniques for using redundancy in synthetic transmit aperture imaging result in slow acquisition speeds due to the large number of transmits (N). Therefore, an opportunity exists to exploit redundancy in synthetic transmit aperture imaging to speed acquisition from N transmits to significantly fewer than N transmits. Moreover, the process of transmitting on one element at a time is limited by the round-trip time, which is dictated by the sound speed and the depth of interest. Additionally, transmission on one element at a time greatly limits the amount of transmitted energy as compared to focused transmission using more than one element or other modes of coordinated transmission, including, but not limited to, plane wave transmission, virtual source transmission, and subaperture transmission. As such, full synthetic transmit aperture imaging suffers from poor SNR and penetration depth. Coded aperture transmission greatly improves the amount of transmitted energy through the use of sets of orthogonal vectors that encode the transmit aperture. The Hadamard matrix can be used in coded aperture transmission based on a set of linearly independent vectors that is comprised solely of biphase values, −1 and 1. All Hadamard matrices are square with dimensions n×n, where n can be from the set 2^k, where k is a non-negative integer. Many other values of n are also known to have Hadamard matrix properties. Let H be a Hadamard matrix of order n. The transpose of H is closely related to its inverse as follows:

H Hᵀ = n Iₙ,   Eq. (1)

where Iₙ is the n×n identity matrix and Hᵀ is the transpose of H. Equation (1) is due to the fact that the rows and columns of H are all orthogonal vectors over the field of real numbers and each vector has a length of √n. Equation (1) shows that the Hadamard matrix enables perfect separation of all channels provided that each row vector of the Hadamard spatial code is time invariant with respect to all other row or column vectors. Thus, for an aperture having N transmitters and N receivers, the SNR improvement based on Hadamard spatial encoding is given by √N, due to the fact that N transmitters are active versus only 1. Typically, the transducer array is excited using the orthogonal vectors of the Hadamard matrix in the case of a bipolar transmitter (−1, 1); or, the transducer array can be excited using the related binary version of the Hadamard matrix, e.g., the S-matrix (scattering matrix), in the case of a unipolar binary transmitter (0, 1). Although the S-matrix has minor limitations that make it slightly inferior to the Hadamard matrix, it is useful in some applications, for example, when the transmitter output cannot be inverted. The Hadamard matrix enables a zero-delay spatial phase encoding scheme. Thus, the spatial encoding and decoding process assumes that there is inconsequential delay between transmissions of orthogonal vectors. In other words, acoustic echoes are decoded assuming no motion or change occurs between transmissions, and assuming that the only variable between transmissions is the specific row (or column) of H being transmitted. The decoding is independent of the delay-and-sum operation in the beamformer. The method also assumes that the elements of each orthogonal vector are transmitted simultaneously with ideal timing such that there is no delay between transmissions comprising each orthogonal vector.
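Equation (1) and the zero-delay decoding assumption can be checked with a toy script. This is our own illustration under the stated zero-delay, motion-free assumptions, with scalar echo amplitudes standing in for full received time series; scipy.linalg.hadamard supplies the matrix.

```python
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n)                    # rows: orthogonal +/-1 transmit vectors
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))   # Eq. (1)

# toy zero-delay, motion-free model: each encoded transmission records the
# inner product of the transmit vector with the per-element echo amplitudes
rng = np.random.default_rng(0)
element_echoes = rng.standard_normal(n)     # stand-in scalars, not full A-lines
recordings = H @ element_echoes             # one recording per Hadamard vector
decoded = (H.T @ recordings) / n            # perfect channel separation
assert np.allclose(decoded, element_echoes)
```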
The set of acoustic echoes corresponding to the orthogonal vectors of the Hadamard matrix is thus decoded simultaneously, assuming zero delay or phase between rows or columns or between any elements of the Hadamard matrix. One primary disadvantage that the Hadamard encoding scheme shares with full synthetic transmit aperture transmission is that it requires n transmissions, which limits the true refresh rate to the pulse repetition frequency (PRF) divided by n. The apparent refresh rate is equal to the PRF when the echo set corresponding to the last transmitted Hadamard orthogonal vector is replaced prior to beamformation; however, the complete and proper sampling of motion is limited by PRF/n, thus resulting in motion blur artifacts for velocities on the order of 1 wavelength times PRF/n. Another disadvantage of Hadamard spatial encoding is that it is limited to square matrices of specific sizes. Another disadvantage of Hadamard spatial encoding is that it does not utilize temporal coding in order to reduce the acquisition time of the entire set. Hadamard spatial encoding has also been extended to the use of complementary coded waveforms, e.g., Golay coded waveforms, for additional SNR improvement. Nonetheless, the fundamental operation and the associated limitations primarily follow those of Hadamard spatial encoding. Hadamard spatial encoding has also been extended to the use of delay-encoded transmission instead of phase encoding, albeit with significantly greater decoding complexity. Nonetheless, the fundamental operation and the associated limitations primarily follow those of Hadamard spatial encoding. To achieve the best possible imaging speed and resolution, all spatial frequencies must be excited simultaneously or nearly simultaneously in order to mitigate the effects of time variance, e.g., tissue motion. Hadamard spatial encoding excites all spatial frequencies, but they are not all excited simultaneously. Only when the linear combination of the entire set of orthogonal vectors is considered (e.g., see Eq. (1)) are all spatial frequencies excited. This is evidenced by the fact that the Fourier transform of the Hadamard matrix is not a constant value for each transmit vector, as illustrated in FIG. 2B for the Hadamard matrix shown in FIG. 2A, in which the transmit vectors are in each row. FIGS. 2A and 2B show diagrams of an example Hadamard matrix corresponding to n=16 with transmit vectors in each row (FIG. 2A) and of the discrete Fourier transform for the n=16 Hadamard matrix rows from FIG. 2A with the DC value being leftmost in each row (FIG. 2B). Because the entire set must be transmitted in order to recover all spatial frequencies, Hadamard spatial encoding is very susceptible to motion artifacts. An encoding strategy that is less susceptible to motion would utilize a spatial encoding scheme that excites all spatial frequencies equally for each transmit vector. Other spatial encoding schemes may be realized that have perfect linear separation similar to Equation (1), with the additional constraint that all spatial frequencies are excited simultaneously. Such a strategy may still be subject to the limitation of N transmits, but the redundancy of spatial sampling information will guarantee less susceptibility to motion. Disclosed are techniques, systems, and devices for spatial and temporal encoding of transmission in full synthetic transmit aperture imaging to achieve spatial and contrast resolution for medical imaging with fewer signal transmissions.
In some example embodiments, a probe device includes one or more transducer segments including an array of transducer elements, and a probe controller in communication with the array of transducer elements to select a first subset of transducer elements of the array to transmit waveforms, and to select a second subset of transducer elements of the array to receive returned waveforms, wherein the first subset of transducer elements is arranged to transmit the waveforms toward a target volume in a biological subject and the second subset of transducer elements is arranged to receive the returned waveforms that return from at least part of the target volume. The probe device is operable to transmit, at the target volume, spatially and temporally encoded waveforms that include a predetermined (i) unique set of waveforms, (ii) transmit delay pattern, and/or (iii) transmit amplitude and phase pattern, such that, after returned acoustic waveforms are received from the target, the returned waveforms are decoded by processing in which the waveform components corresponding to each transmit transducer element are separated from the waveforms on each receive transducer element, resulting in a set of waveforms representative of a full synthetic transmit aperture acquisition. In some example embodiments, a method for encoding acoustic signal transmissions is disclosed. The method comprises transmitting, by a first transducer element, after a time delay associated with the first transducer element, waveforms towards a target volume in a biological subject; receiving, by a second transducer element, after a round-trip time between the first transducer element and the second transducer element, returned waveforms that return from at least part of the target volume; identifying the first transducer element that contributes to the returned acoustic waveforms based on the time delay and the round-trip time; and processing the returned waveforms based on the identification of the first transducer element to generate an image of the target volume in the biological subject. FIG. 3 shows a diagram of an example embodiment of a method 200 for spatial and temporal encoding of acoustic waveforms in synthetic aperture acoustic imaging in accordance with the disclosed technology. The method 200 includes a process 210 to generate a set of spatially and temporally encoded acoustic waveforms for transmission toward a target volume, in which the encoding includes generating one or more of (i) a unique set of encoded waveforms, (ii) a pattern for the transmit delay of the waveforms of the set of waveforms to be transmitted at the target volume, and/or (iii) a transmit amplitude and phase pattern of the set of waveforms to be transmitted at the target volume. The method 200 includes a process 220 to coherently transmit waveforms, toward the target volume, on a spatially-sampled aperture formed on an array of transducer elements for one or more transducer segments of an acoustic probe device, in which each transmit transducer element is indexed i (e.g., 1, 2, . . . , i). The method 200 includes a process 230 to receive the returned acoustic waveforms, which are based on the transmitted encoded acoustic signals, on the spatially-sampled aperture, in which each receive transducer element is indexed j (e.g., 1, 2, . . . , j). The method 200 includes a process 240 to decode the returned (encoded) acoustic waveforms to isolate the ith transmission on the jth reception for a set of image points of the target volume.
Some example implementations of the decoding process 240 include the method 2400, described later with respect to FIG. 24. The method 200 includes a process 250 to beamform isolated echo samples for each image point of the set of image points of the target volume, to produce a data set that can be processed to form a beamformed image of the target volume. In some implementations of the method 200, the process 210 includes generating a set of encoded waveforms for transmission. In such implementations, these encoded waveforms are derived from codes, i.e., sets of numbers, with specific properties. For example, a useful property of an encoded waveform is that, when decoded, the range lobes are small or close to zero and the amplitude of the decoded waveform is higher than that of the encoded waveform. The decoding process, for this example, could include range compression or matched filtering. Another example property of two or more encoded waveforms is that the two or more encoded waveforms are orthogonal. For example, given a set of two encoded waveforms, if the first waveform is decoded with the decoding method for the second waveform, the output is ideally zero. Likewise, if the second waveform is decoded with the decoding method for the first waveform, the output is ideally zero. Likewise, the orthogonality obeys linearity and time invariance; e.g., a composite waveform formed from a linear combination of the first and second waveforms through operations including scaling, addition, subtraction, and/or delay may be decoded. Preferably, a unique set of encoded waveforms generated by the process 210 includes two or more encoded waveforms that are both ideally compressive and ideally orthogonal. Sets of these waveforms may include waveforms that are frequency-coded and/or phase-coded, but such frequency-coding and/or phase-coding are optional, and the unique set of encoded waveforms can include arbitrary waveforms that simultaneously satisfy the properties of range compression and orthogonality. In practice, it is difficult to achieve both properties simultaneously and ideally for more than two waveforms, and thus tradeoffs must be made. The non-ideal nature of the range compression and/or orthogonality can be reduced by including spatial delay and/or spatial amplitude and phase encoding, and these techniques can be included in implementations of the process 210. FIG. 4 shows a diagram of an example embodiment of a system 300 for spatial and temporal encoding of acoustic waveforms in full synthetic transmit aperture acoustic imaging. The system 300 is operable to implement the method 200 for spatially and temporally encoding transmit waveforms and decoding the returned encoded waveforms to produce a beamformed image. In some implementations, the system 300 is operable to generate spatially and temporally encoded waveforms in the form of composite acoustic waveforms that include spread-spectrum, wide instantaneous bandwidth, coherent, pseudo-random noise, and coding characteristics. The example system 300 illustrates one of many system designs in accordance with the disclosed technology. As shown in the example of FIG. 4, the system 300 includes a synthetic aperture acoustic waveform (SAAW) processing device 310 and an acoustic probe device 320 in communication with the SAAW processing device 310.
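For instance, the two waveform properties named above, range compression and mutual orthogonality, may be checked numerically with a matched filter. The sketch below uses two toy windowed tones as stand-ins for coded waveforms; the waveforms, sample rate, and band choices are illustrative assumptions, not the disclosed codes:

```python
# Minimal sketch (toy waveforms assumed): check range compression and
# orthogonality of two candidate encoded waveforms via matched filtering.
import numpy as np

fs, f0, f1 = 20e6, 4e6, 6e6                 # sample rate and two bands (assumptions)
t = np.arange(0, 4e-6, 1/fs)
w = np.hanning(t.size)
wf1 = w * np.cos(2*np.pi*f0*t)              # waveform 1: windowed tone at f0
wf2 = w * np.cos(2*np.pi*f1*t)              # waveform 2: windowed tone at f1

def decode(rx, wf):
    """Matched filter: convolve with the time-reversed conjugate waveform."""
    return np.convolve(rx, np.conj(wf[::-1]))

auto  = decode(wf1, wf1)                    # compression: large, narrow main lobe
cross = decode(wf1, wf2)                    # orthogonality: ideally near-zero output

iso_db = 20*np.log10(np.abs(cross).max() / np.abs(auto).max())
print(f"cross/auto peak isolation: {iso_db:.1f} dB")  # strongly negative for good codes
```

The same two checks (large compressed main lobe, small cross-decoded peak) apply to any candidate set of encoded waveforms, coded or arbitrary.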
The system 300 includes a computer 330, in communication with the SAAW processing device 310, that includes a processing unit (not shown), a display 331, and a user interface module 333 to receive data input and display data output for operation of the system 300. The computer 330 can be implemented as one of various data processing architectures, such as a personal computer (PC), laptop, tablet, and mobile communication device architectures. In some examples, the user interface 333 can include many suitable interfaces, including various types of keyboard, mouse, voice command, touch pad, and brain-machine interface apparatuses. The SAAW processing device 310 includes a system controller 313 comprising a data processing unit. The data processing unit of the system controller 313 includes a processor to process data, a memory in communication with the processor to store data, and an input/output unit (I/O) to interface the processor and/or memory to other modules, units, or devices of the electronics unit, or external devices. For example, the processor can include a central processing unit (CPU), a microcontroller unit (MCU), or other processor units. For example, the processor can include an ASIC (application-specific integrated circuit), FPGA (field-programmable gate array), DSP (digital signal processor), AsAP (asynchronous array of simple processors), and other types of data processing architectures. For example, the memory can include and store processor-executable code, which when executed by the processor, configures the data processing unit to perform various operations, e.g., such as receiving information, commands, and/or data, processing information and data, and transmitting or providing information/data to the acoustic probe device 320 and/or the computer 330. In some implementations, the data processing unit of the system controller 313 (and/or the processing unit of the computer 330) can transmit raw and/or processed data to a computer system or communication network accessible via the Internet (referred to as ‘the cloud’) that includes one or more remote computational processing devices (e.g., servers in the cloud). To support various functions of the data processing unit, the memory can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor. For example, various types of Random Access Memory (RAM) devices, Read Only Memory (ROM) devices, Flash Memory devices, and other suitable storage media can be used to implement storage functions of the memory. The I/O of the data processing unit of the system controller 313 (and/or the processing unit of the computer 330) can interface the data processing unit with a wireless communications unit to utilize various types of wired or wireless interfaces compatible with typical data communication standards, for example, which can be used in communications of the data processing unit with other devices, via a wireless transmitter/receiver (Tx/Rx) unit, e.g., including, but not limited to, Bluetooth, Bluetooth low energy, Zigbee, IEEE 802.11, Wireless Local Area Network (WLAN), Wireless Personal Area Network (WPAN), Wireless Wide Area Network (WWAN), IEEE 802.16 (Worldwide Interoperability for Microwave Access, WiMAX), 3G/4G/LTE cellular communication methods, NFC (Near Field Communication), and parallel interfaces. The I/O of the data processing unit can also interface with other external interfaces, sources of data storage, and/or visual or audio display devices, etc.
to retrieve and transfer data and information that can be processed by the processor, stored in the memory, or exhibited on an output unit or an external device. The SAAW processing device 310 includes a waveform generator 311, which can be controlled by the system controller 313, to produce one or more digital waveforms in accordance with the disclosed spatially and temporally encoded synthetic acoustic transmit aperture techniques. The waveform generator 311 includes an array of waveform synthesizers and beam controllers, which generate analog electronic signals corresponding to the one or more digital waveforms that the acoustic probe device transduces as acoustic waveforms, e.g., including the spatially and temporally encoded composite acoustic waveform. The waveform generator 311 can include a function generator or an arbitrary waveform generator (AWG). For example, the waveform generator 311 can be configured as an AWG to generate arbitrary digital waveforms for the waveform synthesizer and beam controller to synthesize as individual analog waveforms and/or a composite analog waveform. In some implementations, the waveform generator 311 can include a memory unit that can store pre-stored waveforms and coefficient data and information used in the generation of a digital waveform. In some implementations, the waveform synthesizer and beam controller of the waveform generator 311 includes I number of array elements. In one example, the waveform synthesizer and beam controller can be configured to include at least one waveform synthesizer element on each line of the I number of array waveform synthesizers. In another example, the waveform synthesizer and beam controller can include at least one beam controller element on each line of the I number of array beam controllers. In another example, the waveform synthesizer and beam controller can include at least one waveform synthesizer element and beam controller element on each line of the I number of array waveform synthesizers and beam controllers. The waveform synthesizer and beam controller can include a phase-lock loop system for generation of an electronic signal, e.g., a radio frequency (RF) waveform. An exemplary RF waveform can be synthesized by the waveform synthesizer and beam controller from individual waveforms generated in the array elements of the waveform synthesizer and beam controller, e.g., one individual RF waveform can be generated in one array element substantially simultaneously with all other individual waveforms generated by the other array elements of the waveform synthesizer and beam controller. Each individual orthogonal RF waveform can be defined for a particular frequency band, also referred to as a frequency component or ‘chip’, and the waveform properties of each individual orthogonal waveform can be determined by the waveform generator 311 and can include at least one amplitude value and at least one phase value corresponding to the chip. The waveform generator 311 can issue commands and send waveform data including information about each individual orthogonal waveform's properties to the waveform synthesizer and beam controller for generation of individual orthogonal RF waveforms that may be combined together to form a composite RF waveform by the waveform synthesizer and beam controller.
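For instance, the combination of per-chip amplitude and phase values into a composite RF waveform may be sketched as follows; the chip frequencies, shared envelope, and numeric values below are illustrative assumptions, and no synthesizer or beam-controller hardware is modeled:

```python
# Minimal sketch (assumed chip frequencies and envelope): a composite RF
# waveform formed as the sum of frequency 'chips', each carrying one
# amplitude value and one phase value, per the description above.
import numpy as np

fs = 40e6
t = np.arange(0, 5e-6, 1/fs)
chip_freqs = np.array([3e6, 4e6, 5e6, 6e6])             # assumed chip bands
amps       = np.array([1.0, 0.8, 0.9, 0.6])             # one amplitude per chip
phases     = np.array([0.0, np.pi/2, np.pi, -np.pi/4])  # one phase per chip

env = np.exp(-((t - t.mean())**2) / (2 * (1e-6)**2))    # shared pulse envelope
composite = sum(a * env * np.cos(2*np.pi*f*t + p)
                for a, f, p in zip(amps, chip_freqs, phases))
```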
In some embodiments, the SAAW processing device 310 includes an amplifier 317 to modify the generated waveforms produced at the waveform generator 311, e.g., the individual orthogonal RF waveforms and/or the composite RF waveform generated by the waveform synthesizer and beam controller. For example, the amplifier 317 can include an array of I number of amplifiers, each operable to amplify the gain and/or shift the phase of a waveform. In some examples, the array of amplifiers is configured as linear amplifiers. While the amplifier 317 is shown as part of the SAAW processing device 310, the amplifier 317 can also or alternatively be included in the acoustic probe device 320. In some embodiments, the system controller 313 can control some or all of the modules of the system 300, e.g., through connection via a control bus. In some embodiments, the system controller 313 includes a master clock for time synchronization. For example, the master clock can interface with the system controller 313 and other modules of the system 300 to synchronize operations with each other. In various implementations, for example, the SAAW processing device 310 is operable to implement the processes 210, 220, 230, 240, and/or 250 of the method 200. In various implementations, for example, the acoustic probe device 320 is operable to implement the processes 220 and/or 230 of the method 200 in conjunction with the SAAW processing device 310. The acoustic probe device 320 includes one or more transducer segments that can include an array of transducer elements. The acoustic probe device 320 includes a probe controller in communication with the one or more transducer segments (e.g., in communication with the array of transducer elements) to select a first subset of transducer elements of the array to transmit waveforms, and to select a second subset of transducer elements of the array to receive returned waveforms. In some implementations, the first subset of transducer elements are arranged to transmit the waveforms toward a target volume in a biological subject (e.g., a living organism) and the second subset of transducer elements are arranged to receive the returned waveforms that return from at least part of the target volume, and the waveforms are transmitted in accordance with a predetermined transmit delay pattern such that each of the returned waveforms is distinguishable. In some examples, a transduced acoustic wave can be emitted in the form of an acoustic waveform burst. For example, a selected array element of the example transducer array (of a transducer segment) may generate (e.g., transduce) two or more individual orthogonal acoustic waveforms that correspond to the individual orthogonal waveforms determined by the waveform generator 311 and combined spatially to form a composite acoustic waveform. As an example, a selected array element may generate (e.g., transduce) one or more composite acoustic waveforms that correspond to the composite waveforms determined by the waveform generator 311. In some embodiments, for example, the acoustic probe device 320 includes a transmit/receive (T/R) switch configured to allow the acoustic probe to utilize the same transducer element(s) in both a transmit and a receive mode.
For example, in transmit mode, the exemplary transduced and transmitted spatially and temporally encoded composite acoustic waveform can be transmitted toward a target area from a plurality of positions of the transducer array relative to the target, e.g., biological tissue, in which the transduced and transmitted acoustic waveform forms a spatially combined acoustic waveform. The transmitted spatially and temporally encoded composite acoustic waveform can propagate into the target medium, which, for example, can have one or more inhomogeneous mediums that partially transmit and partially reflect the transmitted acoustic waveform. For example, after the acoustic waveform has been transmitted, the T/R switch can be configured into receive mode. The exemplary composite acoustic waveforms that are (at least partially) reflected by the target can be received by the transducer array, e.g., as returned spatially and temporally encoded acoustic waveforms. In some examples, a returned acoustic waveform corresponding to the individual orthogonal waveforms (e.g., frequency chips) can be converted to an analog RF waveform. In some examples, selected transducer elements can be configured to receive the returned acoustic waveform(s) corresponding to the transmitted composite waveform and convert it to a composite analog RF waveform. In some implementations, for example, the probe device 320 can have the beam phase center(s) mechanically translated in one dimension, two dimensions, and/or three dimensions of data sampling/ultrasound scanning positions by spatially moving the transducer array (of the one or more transducer segments) to produce a synthetic aperture during an ultrasound imaging implementation using the system 300. Additionally or alternatively, in some implementations, for example, the probe device 320 can remain stationary, and the beam phase center(s) may be translated electronically in one dimension, two dimensions, and/or three dimensions along the stationary transducer array (of the one or more transducer segments) by individually addressing transducer elements sequentially or randomly, e.g., based on control signals from the system controller 313, as data sampling/ultrasound scanning positions to produce a synthetic aperture during an ultrasound imaging implementation using the system 300. For example, the system 300 can both mechanically and electronically translate the phase centers in one dimension, two dimensions, and/or three dimensions of data sampling/ultrasound scanning positions to produce a synthetic aperture during an ultrasound imaging implementation. An example embodiment of the one or more transducer segments of the acoustic probe device 320 is discussed later with respect to FIGS. 12 and 13. The disclosed techniques, systems, and devices present an alternative solution to zero-delay spatial encoding/decoding techniques. The disclosed technology includes a technique for spatially and temporally encoding coherent transmissions on a plurality of ultrasound transducers to achieve partial or full synthetic transmit aperture imaging with fewer transmits than are required of other coded aperture schemes while still maintaining similar spatial resolution and contrast resolution. Consider a transmit aperture that is encoded in waveform, amplitude and phase, and/or delay, and whose corresponding acoustic echoes are decoded for each point in space.
Using the disclosed technique, each acoustic sample corresponding to a point in space relates to a specific combination of transmitter and receiver that is unique according to its waveform, amplitude and phase, and/or delay, such that when decoded, full synthetic transmit aperture delay-and-sum beamformation results. The disclosed technique is markedly different from spatial Hadamard-based schemes, where the decoding happens across sets of echo samples with the exact same delay across the receive aperture and across orthogonal transmit vectors, independent of image formation. Considering the case of infinite transmit bandwidth and a set of point transmitters, a set of transmission events may overlap a single point in space for only very specific situations. For example, a set of transmit events may all be delayed such that they arrive simultaneously at the same point in space, e.g., geometric focusing to a point. For no other points in space do all transmissions arrive simultaneously, aside from well-known spatial sampling conditions that result in aliasing. The echoes from all other locations in space except for the focal point may coincide with one or more transmissions, but they do not overlap or constructively interfere completely. The image is amplified at the focal point, and there is no distinction as to which transmitter contributes to which echo sample in a particular receiver. In contrast, for the same impulse and point source transmit situation, the set of transmissions may occur with a unique delay pattern such that the echoes received from a point target in space correspond to the individual transmitters when the received echoes are delayed according to the unique set of delays associated with the transmitters, in implementations in accordance with the disclosed techniques. Echoes generated from targets may be considered independent point sources of sound impulses, each arriving at the array of receivers with unique delays according to the unique transmitter delays. All points in space are treated equally, thus enabling imaging of the whole target space, and there is separation of which transmitter contributes to which echo sample in a particular receiver based on the unique combination of transmit delay and round-trip time for a particular combination of transmitter and receiver; thus, all spatial frequencies are excited and potentially recoverable. In some embodiments, transmissions occur on a plurality of transducer elements according to a set of random time delays. Here, the term “random” refers to a set of computer-generated pseudo-random numbers, also referred to herein as random numbers. The random numbers may be generated according to probability distributions including, but not limited to, uniform, normal (Gaussian), Cauchy, exponential, and/or chi-squared. Sets of random numbers may be statistically independent, i.e., the sets are statistically uncorrelated. Some sets of random numbers may function better than others; thus, choosing, manipulating, and/or optimizing sets of random numbers, or sets seeded by random numbers, can facilitate a better outcome. Due to the random nature of the transmit delays combined with finite temporal bandwidth and finite spatial bandwidth, unwanted overlap of transmit and receive events coincident with a point in space poses a problem for the method when a single set of random transmit delays is considered.
When multiple transmissions of multiple random sets of time delays are considered, for example, the overlapping echoes will occur randomly; echo samples will therefore be uncorrelated across the multiple sets and thus more easily rejected in a delay-and-sum beamformer. As each set is statistically independent of the former, the SNR improves monotonically as the number of transmissions tends towards infinity. In a practical application, for example, the number of transmissions cannot be infinite; however, the SNR improves with the square root of the number of independent transmissions. The set of random time delays may be chosen from a uniform random distribution of real numbers spanning a range of delays, for example, spanning real numbers ranging from 0 to 200 wavelengths. The range of delays is primarily limited by transmit-to-receive crosstalk during transmission and the corresponding maximum tolerated standoff distance determined by the maximum delay, e.g., 200 wavelengths as in the previous example. Multiple sets of delay values randomly sampled from the same range comprise a sequence of transmissions that fire sequentially at a specified PRF. The range of delays preferably spans from 0 to the maximum tolerated standoff distance. Example implementations of the disclosed spatial and temporal encoding techniques are described below, including example results using various encoded delays. In the example implementations, the array geometry may be suitably optimized to accommodate the required standoff distance, e.g., optimization of the focal distance in elevation for a 1-D linear array geometry. FIGS. 5A and 5B show, respectively, exemplary images obtained through full synthetic transmit aperture and delay encoded synthetic transmit aperture for 128 sets of 128 delays. FIG. 5A shows the same image shown in FIG. 1, which is provided here in FIG. 5A for comparative purposes with the image shown in FIG. 5B. As previously discussed, FIG. 1 shows an example of a full synthetic transmit aperture image captured with 55.2 dB of dynamic range for comparison. FIG. 5B shows an example image demonstrating a delay encoded synthetic transmit aperture for 128 sets of 128 delays. The example delay encoded synthetic transmit aperture image shown in FIG. 5B is also of the example CIRS model 044 ultrasound phantom, which was generated using a Philips/ATL L7-5 linear array operating at 5 MHz connected to a Verasonics ultrasound imaging system. The image displays 54.0 dB of dynamic range. The image is the result of coherent summation over 128 sets of random delay encoding vectors spanning 0 to 30 wavelengths. There is slightly reduced contrast in the top anechoic lesion as compared to FIG. 5A, and there are artifacts above and below each of the three wire targets. Notably, for example, there are some similarities between the images due to equivalent spatial sampling. The top end of the absolute image brightness range, 187.0 dB, is much greater as compared to 144.8 dB. Also, the noise-free depth-of-penetration is much improved for the delay encoded image, thus revealing the anechoic target (shown in the box labeled 501) at 80 mm depth, which is not visible in FIG. 5A.
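For instance, the square-root SNR behavior can be illustrated with a toy one-dimensional model; the delay range, noise level, and echo position below are illustrative assumptions, and the model stands in for the statistics of random echo overlap rather than simulating ultrasound propagation:

```python
# Minimal sketch (toy model): coherent summation over K statistically
# independent random-delay realizations. The aligned echo grows as K while
# uncorrelated overlap noise grows as sqrt(K), so SNR improves as sqrt(K).
import numpy as np

rng = np.random.default_rng(0)
delays = rng.uniform(0.0, 200.0, size=128)   # one uniform random delay set,
                                             # 0-200 wavelengths, as above
K, N = 128, 2048
signal = np.zeros(N); signal[1000] = 1.0     # one correctly-aligned echo sample

acc = np.zeros(N)
for _ in range(K):
    overlap_noise = rng.normal(0, 0.5, N)    # stands in for random echo overlap
    acc += signal + overlap_noise            # delays assumed already decoded

snr = np.abs(acc[1000]) / acc[np.arange(N) != 1000].std()
print(f"SNR after {K} sets: {snr:.1f}  (~ sqrt(K)/0.5 = {np.sqrt(K)/0.5:.1f})")
```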
FIGS. 6A and 6B show, respectively, exemplary images obtained through full synthetic transmit aperture and delay encoded synthetic transmit aperture for 16 sets of 128 delays. FIG. 6A again shows the full synthetic transmit aperture image (i.e., the same as FIG. 1) captured with 55.2 dB of dynamic range for comparison with FIG. 6B. FIG. 6B shows the delay encoded synthetic transmit aperture image of the example CIRS model 044 ultrasound phantom generated using a Philips/ATL L7-5 linear array operating at 5 MHz connected to a Verasonics ultrasound imaging system. In the image of FIG. 6B, 52.0 dB of dynamic range is displayed. The image of FIG. 6B is the result of coherent summation over 16 sets of random delay encoding vectors spanning 0 to 30 wavelengths (e.g., compared to 128 delay sets in FIG. 5B). As shown in the image, for example, there is reduced contrast in all lesions due to overlapping echoes, though spatial information is largely preserved despite no optimization of the random delay pattern. The noise-free depth-of-penetration is still much improved for the delay encoded image, as compared to FIG. 6A for example, showing the anechoic target (shown in the box labeled 601) at 80 mm depth, albeit with 8× the frame rate. FIGS. 7A and 7B show, respectively, exemplary images obtained through full synthetic transmit aperture and delay encoded synthetic transmit aperture for 128 sets of delays spanning 0 to 1 wavelength. FIG. 7A shows the full synthetic transmit aperture image (the same as FIG. 1) captured with 55.2 dB of dynamic range for comparison. FIG. 7B shows the delay encoded synthetic transmit aperture image of the example CIRS model 044 ultrasound phantom generated using a Philips/ATL L7-5 linear array operating at 5 MHz connected to a Verasonics ultrasound imaging system. In the image of FIG. 7B, 55.5 dB of dynamic range is displayed. The image of FIG. 7B is the result of coherent summation over 128 sets of random delay encoding vectors spanning 0 to 1 wavelength. As shown in the image, for example, the contrast in the anechoic lesions at 60 mm and 80 mm depth is reduced as compared to the 30 wavelength encoded delays used in FIG. 5B. Therefore, the anechoic target 501 at 80 mm depth in FIG. 5B is not apparent in FIG. 7B. FIGS. 8A and 8B show, respectively, exemplary images obtained through full synthetic transmit aperture and delay encoded synthetic transmit aperture for 16 sets of delays spanning 0 to 1 wavelength. FIG. 8A shows the full synthetic transmit aperture image (the same as FIG. 1) captured with 55.2 dB of dynamic range for comparison. FIG. 8B shows the delay encoded synthetic transmit aperture image of the example CIRS model 044 ultrasound phantom generated using a Philips/ATL L7-5 linear array operating at 5 MHz connected to a Verasonics ultrasound imaging system. In the image of FIG. 8B, 52.5 dB of dynamic range is displayed. The image of FIG. 8B is the result of coherent summation over 16 sets of random delay encoding vectors spanning 0 to 1 wavelength. As shown in the image, for example, there are slight improvements in artifacts around the wires as compared to the 30 wavelength encoded delays used in FIG. 6B, but a reduction in contrast in the two deepest lesions at 60 mm and 80 mm. Striping artifacts over depth are due to slight destructive interference from non-optimal delay selection. Also, the noise-free depth-of-penetration is still much improved for the delay encoded image, as compared to FIG. 8A for example, albeit with 8× the frame rate.
In some embodiments of the disclosed methods, the transmissions are electrically and acoustically isolated from the receiver, e.g., such that the crosstalk results in no perceptible artifacts in the resulting image. The transmit delays may be arbitrary in both space and time. For example, randomly delayed transmissions may proceed at random pulse-repetition intervals independently on all elements. Moreover, the pulse repetition interval need not equal or exceed the round-trip time from transmission to reception, as is typically enforced in ultrasound imaging. Additionally, since transmits may be distributed arbitrarily over space and time, some embodiments also include using only one set of transmitters combined with transmit multiplexers to allow arbitrary high-speed selection of a transmit element. The transmitters may be optimized to transmit arbitrary waveforms. In some embodiments, the receivers may be free running, e.g., constantly recording echoes that are continuously directed into beamformer hardware. In some embodiments of the disclosed methods, where it may not be possible to electrically isolate transmitters and receivers for simultaneous operation, e.g., when using the same array for both transmission and reception, circuitry on all or a subset of the receivers blanks or attenuates the transmit crosstalk signal to below a threshold to reduce image artifacts below the threshold of perception. For example, simultaneously with a transmission, one or more receivers are individually switched off using, e.g., a PIN switching diode or similar high-speed, high-bandwidth switch, thus preventing the transmit signal from saturating the receiver electronics. In some embodiments of the disclosed methods, the ADC outputs of all or a subset of receiver channels may be digitally signaled to zero out the transmit crosstalk signal that appears coincident with each transmission, with an adjustable delay and duration. In some embodiments of the disclosed methods, prior to beamforming, transmit crosstalk signals are rejected using signal processing. Moreover, the rejected signals may be recovered using signal processing, e.g., through application of interpolation or any method or algorithm useful for estimating the missing samples based on spatially (e.g., reciprocity) and/or temporally correlated signals (e.g., filtering) spanning one or more transmitter and/or receiver combinations across one or more independent transmit realizations. In some embodiments of the disclosed methods, in the delay-and-sum beamformer, echo samples corresponding to specific transmitter, receiver, and/or delay combinations that result in an overlapping time of arrival to an image point are rejected, omitted, or weighted based on pre-determined patterns either stored in memory or computed within the beamformer. Moreover, the rejected signals may be recovered using signal processing, e.g., through application of interpolation or any method useful for estimating the missing samples based on spatially and/or temporally correlated signals spanning one or more transmitter and/or receiver combinations across one or more independent transmit realizations. Additional spatial encoding is made possible through consideration of the amplitude and/or phase of the transmitted waveforms. Amplitude encoding is accomplished by modulating the amplitude of the transmitted waveform versus element index or spatial element position.
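For instance, the digital zero-out of transmit crosstalk with an adjustable delay and duration may be sketched as follows; the function and parameter names are illustrative assumptions, not names from the disclosure:

```python
# Minimal sketch (illustrative names): zero out the transmit-crosstalk
# interval on each receive channel's ADC stream, with an adjustable blanking
# delay and duration per transmit event.
import numpy as np

def blank_crosstalk(rx, tx_sample_times, blank_delay, blank_len):
    """Zero rx samples coincident with each transmission.

    rx              : (n_channels, n_samples) ADC data
    tx_sample_times : sample indices at which transmissions fired
    blank_delay     : samples to wait after each firing before blanking
    blank_len       : number of samples to zero per firing
    """
    out = rx.copy()
    for t0 in tx_sample_times:
        out[:, t0 + blank_delay : t0 + blank_delay + blank_len] = 0.0
    return out
```

The blanked samples could then be re-estimated by interpolation across correlated channels or transmit realizations, per the recovery step described above.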
Phase encoding is accomplished by modulating the phase of the transmitted waveform versus element index or spatial element position. In some implementations, amplitude encoding and phase encoding can be accomplished in the same process of the method for temporally and spatially encoding acoustic waveforms. For example, a 4-element amplitude encoding sequence, e.g., given by [0.5 1.0 0.0 0.75], combined with a 4-element binary phase encoding sequence of [1 −1 1 −1] results in a 4-element amplitude and phase encoded sequence of [0.5 −1.0 0.0 −0.75], i.e., resulting from the element-wise product of the amplitude sequence with the phase sequence. As discussed above, the best possible imaging speed and resolution is achieved when all spatial frequencies are excited simultaneously or nearly simultaneously in order to mitigate the effects of motion. Delay encoding is a path to nearly simultaneous excitation, yet it may introduce noise from undesirable overlapping echoes, which average out as more statistically independent delayed echo samples are averaged. The amplitude and phase of the transmitted waveforms may be varied for each transmission in unique ways such that they encode all spatial frequencies simultaneously and such that they may be decoded exactly, or in an approximately exact way, with significant SNR gain. For example, for a 4-element aperture, the transmission may have spatial amplitude and phase corresponding to the biphase sequence [1 −1 1 1], which has a corresponding circular autocorrelation of [4 0 0 0], which is exactly a Kronecker delta function with amplitude 4. Likewise, for an 8-element aperture, the biphase amplitude modulated sequence given, e.g., by [1.00000 −0.91546 0.75184 0.99877 0.91478 0.23430 −0.50953 −0.31760] has a circular autocorrelation given by [4.6531 4.3314e-09 −9.2177e-10 6.1084e-09 1.6652e-08 6.1084e-09 −9.2177e-10 4.3314e-09], which is approximately a Kronecker delta function with amplitude 4.6531. In some implementations, arbitrary length sequences may be numerically optimized to maximize the lag-zero circular autocorrelation and minimize all other lags. An example of a numerically optimized set of 16 length-16 random amplitude and phase transmit spatial encoding vectors is shown in FIG. 9A, where the transmit vectors are in each row. The corresponding magnitude of the discrete Fourier transform is shown in FIG. 9B, with the DC value leftmost in each row. Note that the spectra are approximately equal to √8 for all spatial frequencies across all encoding vectors. The decoding properties of each transmit vector are assessed by computing the circular autocorrelation of the transmit vector. FIG. 9C shows the circular autocorrelation of the spatial encodings. The scale is shown in dB, where the maximum is approximately equal to 8 for each encoding vector and the side lobes are below −170 dB for all lags greater than 0, thus an excellent approximation to a Kronecker delta function. Each row vector compresses to an extremely good approximation to a Kronecker delta function with an optimized linear gain of 8 for each length-16 vector. Note that this specific example of an encoding matrix requires a transmitter that allows for arbitrary control of amplitude and phase inversion, which is well within the purview of the disclosed technology. In some implementations, the arbitrary length sequences may also be optimized for 2-dimensional matrices to achieve encoded transmission on, for example, 2-D arrays.
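For instance, the Kronecker delta property of the biphase sequence above can be verified with the standard FFT identity for circular autocorrelation, and a crude random-restart search can stand in for the numerical optimization mentioned above; the search itself is an illustrative assumption, not the disclosed optimizer:

```python
# Minimal sketch: verify the circular-autocorrelation example above using
# circ_autocorr(x) = IFFT(|FFT(x)|**2) for real-valued x.
import numpy as np

def circ_autocorr(x):
    return np.real(np.fft.ifft(np.abs(np.fft.fft(x))**2))

print(circ_autocorr(np.array([1, -1, 1, 1])))  # -> [4. 0. 0. 0.], a Kronecker delta

# Crude stand-in for the optimization: random restarts keeping the length-8
# sequence whose largest non-zero-lag autocorrelation value is smallest.
rng = np.random.default_rng(1)
best, best_side = None, np.inf
for _ in range(20000):
    s = rng.uniform(-1, 1, 8)
    side = np.abs(circ_autocorr(s)[1:]).max()
    if side < best_side:
        best, best_side = s, side
print(best_side)  # residual side-lobe level; a real optimizer drives this lower
```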
Moreover, the sequences may be optimized for 3-dimensional matrices to achieve encoded transmission in 2 spatial dimensions plus the time dimension, which may achieve 3-dimensional encoding. The decoding may be applied through direct circular matched filter convolution with the encoding vector/matrix, or it may be applied in the frequency domain through the use of the discrete Fourier transform, or equivalently, the fast Fourier transform when computationally preferable. The arbitrary length sequences also have a close relationship to uniformly redundant arrays (URAs), which are mask patterns primarily applied to optical imaging. The URAs are binary, and they share similar Kronecker delta properties when correlated with their matched pattern. The URAs essentially enable pinhole-like imaging resolution using a much larger aperture, thus much more received light and higher SNR. The URAs are primarily limited to far field imaging; however, ultrasound imaging is well known to occur in the near field of an aperture. The disclosed techniques have the unique property of leveraging a far field spatial encoding strategy to address a near field imaging problem, which has not been contemplated before in ultrasound imaging. As the delay component of the delay-and-sum beamformer transforms near field echoes into their far field equivalents, the spatial decoding is applied to delayed echo samples prior to summation. The disclosed techniques can apply decoding prior to summation at the sample delay step of the delay-and-sum beamformer, i.e., the decoding is applied to echo samples with different delays, which represents a radical departure from traditional spatial encoding/decoding vis-à-vis Hadamard spatial encoding, where the decoding is applied to echo samples with the same delay. In some embodiments, the encoding vectors may be complementary. For example, the encoding and decoding vectors may not be identical; however, their circular cross-correlation results in a Kronecker delta function, while their individual circular autocorrelations are not Kronecker delta functions. This may also provide for obfuscation of the observable encoding vector from reverse engineering, providing an alternative to other obfuscating techniques such as vector scrambling, convolution with other random vectors, etc. Example implementations of the disclosed amplitude and phase and delay encoding strategy were tested in an example simulation for 9 point targets, for 128 sets of amplitude and phase encoding vectors combined with 128 sets of random delay encoding vectors spanning 0 to 227.5 wavelengths. The simulated array was a Philips L7-4 linear array operating at 5 MHz, and the simulation was performed using the Verasonics imaging system software simulator. FIGS. 10 and 11 show exemplary images of 9 point targets beamformed with delay encoding only (FIG. 10) and with delay encoding combined with amplitude and phase encoding (FIG. 11). FIG. 10 shows an image of 9 point targets beamformed with delay encoding only, in which 100 dB of dynamic range is displayed. Note the artifacts (e.g., scattered grey pixels) due to echo overlap in the lateral dimension (labeled 801) and the depth dimension (labeled 803). FIG. 11 shows an image of 9 point targets beamformed with delay encoding combined with amplitude and phase encoding, in which 100 dB of dynamic range is displayed.
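For instance, the circular matched-filter decode, applied in the frequency domain via the FFT as described above, may be sketched as follows; the shifted-mixture test input is an illustrative assumption chosen to show the linear separation:

```python
# Minimal sketch: circular decode via the FFT. A linear mixture of
# circularly shifted copies of the encoding vector decodes to isolated
# peaks at the corresponding lags.
import numpy as np

def circ_decode(samples, code):
    """Circular cross-correlation of `samples` with encoding vector `code`."""
    return np.real(np.fft.ifft(np.fft.fft(samples) * np.conj(np.fft.fft(code))))

code = np.array([1, -1, 1, 1], dtype=float)         # the biphase example above
mix  = 3.0*np.roll(code, 2) + 0.5*np.roll(code, 0)  # mixture of shifted encodings
print(circ_decode(mix, code))                       # -> [2. 0. 12. 0.]: peaks of
                                                    # 12 and 2 at lags 2 and 0
```

The same routine applied with a complementary decoding vector, rather than the encoding vector itself, would realize the obfuscated complementary scheme mentioned above.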
As shown in FIG. 11, for example, the artifacts are greatly reduced in the lateral dimension (labeled 805), and there is artifact reduction in the depth dimension (labeled 807), e.g., due to amplitude and phase spatial encoding. Also, it is noted that there is a greater absolute image magnitude of 123.2 dB vs. 117.9 dB. In some embodiments, combined encoding can be implemented as well. A combination of amplitude, phase, and delay encoding may be utilized to improve the speed of data acquisition and reduce image artifacts in all aforementioned embodiments of spatial delay encoding. For example, for a given field-of-view, the encoding delays may be optimized to minimize the average occurrence of overlapping echoes across the entire image. Additionally, refinements in the interframe and intraframe post-image processing can have a major impact on improving image quality without significant changes to the embodiments as disclosed. The disclosed methods and systems are fully compatible with coded waveforms, and they will likely benefit from the channel isolation and waveform diversity aspects of coded waveform transmission. FIG. 12 shows a block diagram of one example synthetic transmit aperture acoustic system that can accommodate the disclosed technology. As shown in FIG. 12, the system includes a transmit/receive electronics module 910 in electrical communication with an acoustic probe device 920 and with a data processing unit or computer 930. The transmit/receive electronics module 910 is configured to generate the individual coded waveforms on multiple channels transferred to the probe device 920 for transmitting and receiving one or more composite waveforms (e.g., coherent, spread-spectrum, instantaneous-wideband, coded waveforms) based on the individually-generated coded waveforms. The probe device 920 includes a probe controller unit in communication with a probe interface unit that is in communication with each of the probe transducer segments. For transmit, the probe controller is operable to receive, from the transmit/receive electronics module 910, the waveform information of the generated discrete waveforms carried on the multiple communication channels, which are transduced by the transducer elements on the probe transducer segments. The probe interface includes circuitry to route the waveform signals to selected transducer elements. The probe device 920 can include one transducer segment or an array of multiple transducer segments arranged on a section of the housing body having a particular geometry that makes contact with a body structure of the subject. In some embodiments, for example, the section can include a flat shape, whereas in other embodiments, the section can include a curved shape. FIG. 13 shows a diagram of exemplary composite ultrasound beams generated by transducer sub-arrays on multiple transducer segments that form a synthetic transmit aperture beam from multiple transmitting positions along a 180° curvature of the probe 920. As shown in the diagram, a probe 920 includes multiple transducer segments used to form one or more real aperture sub-arrays Sub 1, Sub 2, . . . , Sub N on one or more of the transducer segments. Some or all of the transducer elements that form the transducer array can transmit (e.g., either sequentially, simultaneously, or randomly) one or more composite acoustic waveforms of individual, mutually orthogonal, coded acoustic waveforms transmitted to a target from multiple sub-array phase center positions to form a synthetic transmit aperture for ultrasound imaging.
In some implementations, different transducer elements on the transducer segments can be selected to form the receive array to receive the returned acoustic waveforms corresponding to the transmitted acoustic waveform (formed based on the individual, mutually orthogonal, coded acoustic waveforms), in which the received acoustic waveforms are scattered back and returned (e.g., reflected, refracted, diffracted, delayed, and/or attenuated) from at least part of the target. In other implementations, some or all of the transducer elements that form the transmit array can also receive the returned acoustic waveforms corresponding to the transmitted acoustic waveform. The received individual acoustic waveforms thereby form one or more received composite waveforms that correspond to the transmitted composite acoustic waveforms. FIG. 14 shows a mathematical expression of encoded transmission on an arbitrary set of array elements. The waveform $tx_i(t)$ drives the $i$th array element with waveform encoding function $wf_i(t)$, amplitude and phase encoding vector $\alpha_i$, and delay encoding vector $\tau_{ei}$. The echo waveform $rx_j(t)$ is received from the $j$th array element. All or a subset of the array elements are driven coherently in the same transmission event. FIG. 15 shows an exemplary depiction of coherent transmission from 16 array elements into a medium with propagation speed $c$. The depicted wavefronts each emanate from a single element with constant delay across the aperture. Each wavefront may correspond to a unique waveform encoding and/or amplitude and phase encoding. FIG. 16 shows an exemplary depiction of coherent transmission from 16 array elements into a medium with propagation speed $c$. The depicted wavefronts each emanate from a single element with random delay encoding across the aperture. Each wavefront may also correspond to a unique waveform encoding and/or amplitude and phase encoding. FIG. 17 shows the beamforming geometry for an arbitrary set of array elements. The vectors $\vec{r}_i$, $\vec{r}_j$, and $\vec{r}_p$ are the 3D vector positions of the transmit element, the receive element, and the image point $p$, respectively, relative to an origin. In the beamformer, geometric focusing delays are computed according to the round-trip distance from the $i$th transmission element to the image point and back to the $j$th reception element, divided by the medium speed $c$, as follows:

$$\tau_p(i,j) = \frac{\left|\vec{r}_p - \vec{r}_i\right| + \left|\vec{r}_p - \vec{r}_j\right|}{c} \qquad \text{Eq. (2)}$$

where the $|\cdot|$ operator denotes the Euclidean distance of the enclosed vector and $\tau_p(i,j)$ is the focusing delay for point $p$ corresponding to transmit element $i$ and receive element $j$. Equation (2) is a summary of the delay calculation in a delay-and-sum beamformer. FIG. 18 shows an exemplary set of 16 independent encoding waveforms, each with a center frequency of 5 MHz and a −6 dB fractional bandwidth of 70%, and each having nearly ideal linear autocorrelation properties, for example, range lobes below −60 dB. FIG. 19 shows an exemplary sequence of uniformly randomly distributed encoding delays ranging from 0.196 microseconds to 2.56 microseconds. FIG. 20 shows the exemplary set of 16 encoding waveforms shown in FIG. 18 with delay encoding as shown in FIG. 19. FIG. 21 shows an exemplary sequence of 16 amplitude and phase encoding values with nearly ideal circular autocorrelation properties.
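For instance, the Eq. (2) delay calculation may be sketched directly; the element positions and the sound speed below are illustrative assumptions:

```python
# Minimal sketch of the Eq. (2) focusing-delay computation in a
# delay-and-sum beamformer (positions and sound speed are assumed values).
import numpy as np

def focusing_delay(r_i, r_j, r_p, c=1540.0):
    """Round-trip delay: transmit element i -> image point p -> receive element j."""
    return (np.linalg.norm(r_p - r_i) + np.linalg.norm(r_p - r_j)) / c

r_i = np.array([-5e-3, 0.0, 0.0])    # transmit element position (m)
r_j = np.array([ 5e-3, 0.0, 0.0])    # receive element position (m)
r_p = np.array([ 0.0,  0.0, 30e-3])  # image point (m)
tau = focusing_delay(r_i, r_j, r_p)  # seconds; c = 1540 m/s assumed for tissue
```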
FIG. 22 shows the circular autocorrelation of the sequence shown in FIG. 21, illustrating the nearly ideal Kronecker delta properties of the sequence, with non-zero lag values less than 2.09e-08 and a linear gain of 8 at lag zero. FIG. 23 shows the exemplary set of 16 encoding waveforms shown in FIG. 20 with amplitude and phase encoding as shown in FIG. 21. FIG. 24 shows a diagram of an example embodiment for a decoding method 2400 in accordance with the present technology. In some implementations, for example, the method 2400 can be implemented at the process 240 of the method 200. In various implementations of the method 2400, e.g., depending on which encoding strategies are utilized, the exemplary decoding method 2400 can include up to three stages comprising (i) coded waveform decoding (e.g., decoding the unique set of encoded waveforms, which can include arbitrary waveforms that simultaneously satisfy the properties of range compression and orthogonality, and/or frequency-coded and/or phase-coded waveforms), (ii) transmit delay pattern decoding, and (iii) transmit amplitude and phase pattern decoding, in which the coded waveform decoding stage, the transmit delay pattern decoding stage, and/or the transmit amplitude and phase pattern decoding stage is selected based on which of the respective encoding techniques is employed, e.g., at the process 210 of the method 200. After receiving encoded acoustic signals at 2410 in the diagram of FIG. 24, the decoding method 2400 can include a first decoding phase, which in this example implements a process 2420 to decode coded waveforms. The decoding method 2400 can include a second decoding phase, which in this example implements a process 2430 to decode transmit delays. The decoding method 2400 can include a third decoding phase, which in this example implements a process 2440 to decode amplitudes and phases. The decoding method 2400 can include a beamforming process 2450. The processes of the example decoding method are described below. The example procedure shown in FIG. 24 presents one order of the decoding stages of the method 2400, including decoding encoded waveforms, decoding transmit delays, and decoding amplitude and phase; however, the procedure is not limited to a specific order of decoding or a specific method of decoding or a specific decoding algorithm. In the exemplary decoding method 2400 shown in FIG. 24, the first stage includes waveform decoding, in which the $j$th received echo is filtered with a time-reversed and conjugated version of the $i$th transmitted waveform, resulting in a partially decoded set of waveforms as follows:

$$rx_{ij}^{(d1)}(t) = rx_j(t) * wf_i^{*}(-t) \qquad \text{Eq. (3)}$$

where the $*$ operator between terms denotes convolution, the superscript $^{*}$ denotes conjugation, and the superscript $(d1)$ denotes the first decoding. Note that the waveform received on element $j$ due to transmission on element $i$ is given by $rx_{ij}^{(d1)}(t)$; e.g., there is now a form of separation between the echo components in the $j$th received echo corresponding to the $i$th transmission. Here, $rx_{ij}^{(d1)}$ is a 2D beamformer sample matrix where the $i$th row is referenced according to transmit index $i$ and the $j$th column is referenced according to receive index $j$. The second stage of the decoding method 2400 can include a delay decoding process. The output of the waveform decoding stage is delayed for an image point $p$ using the geometry shown in FIG. 17, according to the delay calculation in Equation (2) in addition to the encoding delay $\tau_{ei}$ shown in FIG. 14, according to the following:

$$rx_{ij}^{(d2)} = rx_{ij}^{(d1)}\big(\tau_p(i,j) + \tau_{ei}\big) \qquad \text{Eq. (4)}$$
where the superscript $(d2)$ denotes the second decoding. Here, $rx_{ij}^{(d2)}$ is a 2D beamformer sample matrix where the $i$th row is referenced according to transmit index $i$ and the $j$th column is referenced according to receive index $j$. The third stage of the decoding method 2400 can include amplitude and phase decoding. The output of the delay decoding stage is decoded with a function $f_\alpha(X)$, which is a function of the amplitude and phase encoding vector $\alpha_i$, resulting in a three-times decoded set of echoes as follows:

$$rx_{ij}^{(d3)} = f_\alpha\big(rx_{ij}^{(d2)}\big) \qquad \text{Eq. (5)}$$

where, in one possible embodiment, $f_\alpha(X)$ is the circular correlation between the column vector $\alpha_i$ and each column of $X$, where $X$ is a 2D matrix. Here, $rx_{ij}^{(d3)}$ is a 2D beamformer sample matrix where the $i$th row is referenced according to transmit index $i$ and the $j$th column is referenced according to receive index $j$. In the exemplary embodiment, the beamformed sample for point $p$ is obtained by a weighted summation over all decoded transmitter and receiver combinations as follows:

$$b_p = \sum_i \sum_j w_p(i,j)\, rx_{ij}^{(d3)} \qquad \text{Eq. (6)}$$

where the weighting or apodization function $w_p(i,j)$ is a function of the image point $p$, transmission element $i$, and reception element $j$. The beamformed sample $b_p$ may be obtained by combining the decoded echo samples in other ways, for example, using a nonlinear and/or adaptive beamformer. In the exemplary embodiment, the beamformed sample $b_p$ may be obtained for multiple independent transmissions, where each transmission utilizes an independent set of encoding waveforms, encoding amplitude and phase, and/or encoding delays. Denoting the index of the transmission event as $k$, and the beamformed sample for each transmission as $b_{pk}$, the beamformed sample from multiple transmissions may be found by summing over multiple transmissions as follows:

$$\hat{b}_p = \sum_k b_{pk} \qquad \text{Eq. (7)}$$

where $\hat{b}_p$ denotes an estimated version of $b_p$. Likewise, the beamformed sample sequence $b_{pk}$ may also be filtered using a finite impulse response (FIR) and/or infinite impulse response (IIR) filter and/or a nonlinear filter, such as a windowed median filter, and/or a statistically optimal filter, such as a Kalman filter. Although the aforementioned encoding and decoding scheme was implicitly described for a 1D array, it may be extended to any geometry by simply applying the appropriate array element indexing scheme. In some example implementations, an optimization may be performed to fine tune the entire encoding and decoding process. For example, encoding waveforms, encoding amplitudes, and/or encoding delays may be numerically varied by an optimizer to minimize the value of an objective function. The objective function would seek to minimize image artifacts in the encoded synthetic transmission aperture image, given by $I_{ESTA}$, relative to an ideally beamformed image based on full synthetic transmission aperture, given by $I_{FSTA}$. For example, a nonlinear optimization may be defined as follows:

$$\underset{\alpha_i,\; wf_i(t),\; \tau_{ei}}{\operatorname{arg\,min}} \;\; \sum \left| I_{FSTA} - I_{ESTA}\big(\alpha_i, wf_i(t), \tau_{ei}\big) \right|^2 \qquad \text{Eq. (8)}$$

where the summation is taken over the magnitude squared of all image samples. The nonlinear optimizer solves for the best encoding parameters given a fixed decoding procedure, for example, according to the previously described decoding procedure. The example optimization may also be performed over unique sets of $\alpha_i$, $wf_i(t)$, and $\tau_{ei}$ and corresponding images $I_{ESTA}$. The example optimization may be accomplished using nonlinear machine learning algorithms.
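For instance, the three decoding stages and the Eq. (6) summation may be sketched for a single image point as follows. The array shapes and names are illustrative assumptions; fractional delays are rounded to whole samples rather than interpolated, and real-valued codes are assumed so that the Eq. (5) step reduces to a circular correlation over the transmit index:

```python
# Minimal sketch (toy shapes, illustrative names) of Eqs. (3)-(6):
# waveform decode, delay decode, amplitude/phase decode, weighted sum.
import numpy as np

def decode_and_beamform(rx, wf, tau_p, tau_e, alpha, w_p, fs):
    """rx: (J, T) received echoes; wf: (I, L) real transmit waveforms;
    tau_p: (I, J) Eq. (2) focusing delays (s); tau_e: (I,) encoding delays (s);
    alpha: (I,) real amplitude/phase code; w_p: (I, J) apodization; fs: Hz."""
    I, J = wf.shape[0], rx.shape[0]
    d2 = np.zeros((I, J))
    for i in range(I):
        for j in range(J):
            d1 = np.convolve(rx[j], wf[i][::-1])          # Eq. (3): matched filter
            k = int(round((tau_p[i, j] + tau_e[i]) * fs)) # Eq. (4): sample the
            d2[i, j] = d1[k] if 0 <= k < d1.size else 0.0 # decoded delay
    # Eq. (5): circular correlation of alpha with each column of d2
    A = np.conj(np.fft.fft(alpha))
    d3 = np.real(np.fft.ifft(A[:, None] * np.fft.fft(d2, axis=0), axis=0))
    return np.sum(w_p * d3)                               # Eq. (6): beamformed b_p
```

Per Eq. (7), the returned sample would be accumulated over the independent transmission events k, and the accumulated sequence could then be filtered as described above.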
For example, in such a machine learning approach, a set of encoding parameters is learned using a machine learning algorithm such that the error between the training image set based on full synthetic aperture and the encoded synthetic aperture image set is minimized. The example optimization may be accomplished online, using an imaging system within the optimization or machine learning loop to generate both the training image set and the output image set. The example optimization may be accomplished offline, using a full synthetic aperture data set with artificially imposed waveform encoding, amplitude and phase encoding, and delay encoding. EXAMPLES In some embodiments in accordance with the present technology (example A1), a probe device to interface a body structure of a biological subject includes one or more transducer segments comprising an array of transducer elements, and a probe controller in communication with the array of transducer elements to select a first subset of transducer elements of the array to transmit waveforms, and to select a second subset of transducer elements of the array to receive returned waveforms, in which the first subset of transducer elements are arranged to transmit the waveforms toward a target volume in the biological subject and the second subset of transducer elements are arranged to receive the returned waveforms that return from at least part of the target volume, and the waveforms are transmitted in accordance with a predetermined transmit delay pattern. The probe device is operable to transmit, at the target volume, spatially and temporally encoded waveforms that include a predetermined (i) unique set of waveforms, (ii) transmit delay pattern, and/or (iii) transmit amplitude and phase pattern, such that, after returned acoustic waveforms are received from the target, the returned waveforms are decoded by processing in which waveform components corresponding to each transmit transducer element are separated from the waveforms on each receive transducer element, resulting in a set of waveforms representative of a full synthetic transmit aperture acquisition. Example A2 includes the probe device of example A1, wherein the predetermined transmit delay pattern comprises a set of random time delays. Example A3 includes the probe device of example A2, wherein the set of random time delays is a uniform distribution of random values within a range spanning from zero to a maximum tolerated standoff distance of the array of transducer elements. Example A4 includes the probe device of example A1, wherein the first subset of transducer elements is different from the second subset of transducer elements. Example A5 includes the probe device of example A1, wherein the first subset of transducer elements is the same as the second subset of transducer elements. Example A6 includes the probe device of example A5, wherein the second subset of transducer elements attenuates a transmit crosstalk signal to reduce image artifacts. Example A7 includes the probe device of example A1, wherein the waveforms have different amplitudes for each transmission. Example A8 includes the probe device of example A1, wherein the waveforms have different phases for each transmission. Example A9 includes the probe device of example A1, wherein different waveforms are used for each transmission.
In some embodiments in accordance with the present technology (example A10), a method of signal transmission includes transmitting by a first transducer element, after a time delay associated with the first transducer element, waveforms towards a target volume in a biological subject; receiving by a second transducer element, after a round-trip time between the first transducer element and the second transducer element, returned waveforms that return from at least part of the target volume; identifying the first transducer element that contributes to the returned acoustic waveforms based on the time delay and the round-trip time; and processing the returned waveforms based on the identification of the first transducer element to generate an image of the target volume in the biological subject. Example A11 includes the method of example A10, wherein the time delay is selected from a set of random time delays. Example A12 includes the method of example A11, wherein the set of random time delays is a uniform distribution of random values within a range spanning from zero to a maximum tolerated standoff distance of the first and second transducer elements. Example A13 includes the method of example A10, wherein the first transducer element is different from the second transducer element. Example A14 includes the method of example A10, wherein the first transducer element is the same as the second transducer element. Example A15 includes the method of example A14, wherein the second transducer element attenuates a transmit crosstalk signal to reduce image artifacts. Example A16 includes the method of example A10, wherein the waveforms have different amplitudes for each transmission. Example A17 includes the method of example A10, wherein the waveforms have different phases for each transmission. Example A18 includes the method of example A10, wherein different waveforms are used for each transmission. In some embodiments in accordance with the present technology (example B1), a method for spatial and temporal encoding of acoustic waveforms in synthetic aperture acoustic imaging includes generating a set of spatially and temporally encoded acoustic waveforms for transmission toward a target volume that includes generating one or more of (i) a unique set of coded waveforms, (ii) a transmit delay pattern of time delays for acoustic waveforms to be transmitted at the target volume, or (iii) a transmit amplitude and phase pattern of the acoustic waveforms to be transmitted at the target volume; coherently transmitting the spatially and temporally encoded acoustic waveforms, toward the target volume, using a spatially-sampled aperture formed on an array of transducer elements for one or more transducer segments of an acoustic probe device, wherein each transducer element used in the transmitting is assigned a first index number 1 to i, wherein i is a number equal to or less than a total number of transducer elements; receiving returned encoded acoustic waveforms on the spatially-sampled aperture, wherein the transducer elements used in the receiving are assigned a second index number 1 to j, wherein j is a number equal to or less than a total number of transducer elements; decoding the returned encoded acoustic waveforms to isolate the ith transmission on the jth reception that correspond to a set of image points of the target volume; and processing the decoded returned encoded acoustic waveforms to beamform isolated echo samples for each image point of the set of image points of the target volume.
Example B2 includes the method of example B1, further comprising forming an image of the target volume by processing data associated with the beamformed isolated echo samples. Example B3 includes the method of example B1, wherein each time delay in the transmit delay pattern for the acoustic waveforms to be transmitted is selected from a set of random time delays. Example B4 includes the method of example B3, wherein the set of random time delays includes a uniform distribution of random values within a range spanning from zero to a maximum tolerated standoff distance between two or more transducer elements. Example B5 includes the method of example B1, wherein the generating the transmit delay pattern of time delays for acoustic waveforms includes generating randomly delayed transmission times to allow transmission of the acoustic waveforms at random pulse-repetition intervals independently on all transducer elements of the array for one or more transducer segments. Example B6 includes the method of example B1, wherein the generating the transmit amplitude and phase pattern of the acoustic waveforms includes modulating an amplitude and modulating a phase for each acoustic waveform to be transmitted with respect to a transducer element index or a spatial position of the transducer element. Example B7 includes the method of example B1, wherein the encoded acoustic waveforms have different amplitudes for each transmission. Example B8 includes the method of example B1, wherein the encoded acoustic waveforms have different phases for each transmission. Example B9 includes the method of example B1, wherein the unique set of coded waveforms includes arbitrary waveforms that simultaneously satisfy properties of range compression and orthogonality. In some embodiments in accordance with the present technology (example B10), an acoustic probe device to interface a body structure of a biological subject includes one or more transducer segments comprising an array of transducer elements; and a probe controller in communication with the array of transducer elements to select a first subset of transducer elements of the array to transmit acoustic waveforms, and to select a second subset of transducer elements of the array to receive returned acoustic waveforms, wherein the first subset of transducer elements are arranged to transmit the acoustic waveforms toward a target volume in the biological subject and the second subset of transducer elements are arranged to receive the returned acoustic waveforms that return from at least part of the target volume, wherein the probe device is operable to transmit the acoustic waveforms in accordance with a predetermined transmit delay pattern that spatially and temporally encodes transmit waveforms such that each of the returned acoustic waveforms is distinguishable from another. Example B11 includes the device of example B10, wherein the predetermined transmit delay pattern comprises a set of random time delays. Example B12 includes the device of example B11, wherein the set of random time delays includes a uniform distribution of random values within a range spanning from zero to a maximum tolerated standoff distance of the array of transducer elements. Example B13 includes the device of example B10, wherein the first subset of transducer elements is different from the second subset of transducer elements. Example B14 includes the device of example B10, wherein the first subset of transducer elements is the same as the second subset of transducer elements. 
Example B15 includes the device of example B14, wherein the second subset of transducer elements attenuates a transmit crosstalk signal to reduce image artifacts. Example B16 includes the device of example B10, wherein the acoustic waveforms have different amplitudes for each transmission. Example B17 includes the device of example B10, wherein the acoustic waveforms have different phases for each transmission. Example B18 includes the device of example B10, wherein different frequency-coded or phase-coded waveforms are used for each transmission. In some embodiments in accordance with the present technology (example B19), a method of signal transmission includes transmitting by a first transducer element, after a time delay associated with the first transducer element, acoustic waveforms towards a target volume in a biological subject; receiving by a second transducer element, after a round-trip time between the first transducer element and the second transducer element, returned acoustic waveforms that return from at least part of the target volume; identifying the first transducer element that contributes to the returned acoustic waveforms based on the time delay and the round-trip time; and processing the returned acoustic waveforms based on the identification of the first transducer element to generate an image of the target volume in the biological subject. Example B20 includes the method of example B19, wherein the time delay is selected from a set of random time delays. Example B21 includes the method of example B20, wherein the set of random time delays includes a uniform distribution of random values within a range spanning from zero to a maximum tolerated standoff distance of the first and second transducer elements. Example B22 includes the method of example B19, wherein the first transducer element is different from the second transducer element. Example B23 includes the method of example B19, wherein the first transducer element is the same as the second transducer element. Example B24 includes the method of example B23, wherein the second transducer element attenuates a transmit crosstalk signal to reduce image artifacts. Example B25 includes the method of example B19, wherein the acoustic waveforms have different amplitudes for each transmission. Example B26 includes the method of example B19, wherein the acoustic waveforms have different phases for each transmission. Example B27 includes the method of example B19, wherein different frequency-coded or phase-coded waveforms are used for each transmission. In this description, the word “exemplary” is used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word exemplary is intended to present concepts in a concrete manner. It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise. 
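Examples A2-A3 and B3-B6 above specify the encoding patterns concretely enough to sketch. The snippet below draws a uniform random transmit delay pattern over a range bounded by a maximum tolerated standoff (converted from distance to time via an assumed sound speed) and builds a per-element amplitude and phase pattern; the array size, carrier values, and particular modulation laws are illustrative assumptions rather than values taken from the examples.

```python
import numpy as np

rng = np.random.default_rng(42)

N_ELEMENTS = 64            # hypothetical array size
SOUND_SPEED = 1540.0       # m/s, typical soft-tissue value (assumed)
MAX_STANDOFF = 0.01        # m, assumed maximum tolerated standoff distance

# Examples A2/A3, B3/B4: random time delays drawn from a uniform
# distribution over [0, delay equivalent of the maximum standoff].
max_delay = MAX_STANDOFF / SOUND_SPEED
transmit_delays = rng.uniform(0.0, max_delay, N_ELEMENTS)

# Example B6 (assumed modulation law): amplitude and phase varied with
# respect to the transducer element index.
idx = np.arange(N_ELEMENTS)
amplitudes = 0.5 + 0.5 * np.cos(2 * np.pi * idx / N_ELEMENTS)
phases = rng.uniform(0.0, 2 * np.pi, N_ELEMENTS)

# Schematically, element i would then fire
# amplitudes[i] * s(t - transmit_delays[i]) with carrier phase phases[i].
print(transmit_delays[:4], amplitudes[:4], phases[:4])
```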
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments. Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
83,676
11860274
DETAILED DESCRIPTION Hereinafter, an embodiment and a modification disclosed here will be described with reference to the drawings. Configurations of the embodiment and the modification described below and actions and effects provided by the configurations are merely examples, and are not limited to the following description. Embodiment FIG. 1 is an exemplary and schematic view illustrating an appearance of a vehicle 1 including an object detection system according to an embodiment when viewed from above. As described below, the object detection system according to the embodiment is an in-vehicle sensor system that performs a transmission and reception of a sound wave (ultrasonic wave) and acquires a time difference or the like of the transmission and reception, thereby detecting information related to an object (for example, an obstacle O illustrated in FIG. 2 to be described later) including a person present around the object detection system. More specifically, as illustrated in FIG. 1, the object detection system according to the embodiment includes an electronic control unit (ECU) 100 as an in-vehicle control device and object detection devices 201 to 204 as in-vehicle sonars. The ECU 100 is mounted inside the four-wheel vehicle 1 including a pair of front wheels 3F and a pair of rear wheels 3R, and the object detection devices 201 to 204 are mounted on an exterior of the vehicle 1. In the example illustrated in FIG. 1, as an example, the object detection devices 201 to 204 are installed at different positions on a rear end portion (rear bumper) of a vehicle body 2 as the exterior of the vehicle 1, but the installation positions of the object detection devices 201 to 204 are not limited to the example illustrated in FIG. 1. For example, the object detection devices 201 to 204 may be installed on a front end portion (front bumper) of the vehicle body 2, may be installed on a side surface portion of the vehicle body 2, or may be installed on two or more of the rear end portion, the front end portion, and the side surface portion. Further, in the embodiment, hardware configurations and functions of the object detection devices 201 to 204 are the same as each other. Therefore, in the following description, the object detection devices 201 to 204 may be collectively referred to as an object detection device 200 for simplification of description. Further, in the embodiment, the number of object detection devices 200 is not limited to four as illustrated in FIG. 1. FIG. 2 is an exemplary and schematic block diagram illustrating a hardware configuration of the object detection system according to the embodiment. As illustrated in FIG. 2, the ECU 100 has a hardware configuration similar to that of a normal computer. More specifically, the ECU 100 includes an input and output device 110, a storage device 120, and a processor 130. The input and output device 110 is an interface for implementing the transmission and reception of information between the ECU 100 and an outside. For example, in the example illustrated in FIG. 2, communication partners of the ECU 100 are the object detection device 200 and a temperature sensor 50. The temperature sensor 50 is mounted on the vehicle 1 so as to measure a temperature around the vehicle 1 (target environment). The storage device 120 includes a main storage device such as a read only memory (ROM) or a random access memory (RAM), and/or an auxiliary storage device such as a hard disk drive (HDD) or a solid state drive (SSD). The processor 130 manages various processing executed by the ECU 100. 
The processor 130 includes an arithmetic unit, for example, a central processing unit (CPU), and the like. The processor 130 reads and executes a computer program stored in the storage device 120, thereby implementing various functions, for example, automatic parking, and the like. As illustrated in FIG. 2, the object detection device 200 includes a transceiver 210 and a controller 220. The transceiver 210 includes a vibrator 211 including a piezoelectric element or the like, and the transmission and reception of the ultrasonic wave is implemented by the vibrator 211. More specifically, the transceiver 210 transmits, as a transmission wave, an ultrasonic wave generated in accordance with a vibration of the vibrator 211, and receives, as a reception wave, the vibration of the vibrator 211 caused by the ultrasonic wave transmitted as the transmission wave being reflected by an object present outside and returned. In the example illustrated in FIG. 2, the obstacle O installed on a road surface RS is illustrated as the object that reflects the ultrasonic wave from the transceiver 210. In the example illustrated in FIG. 2, a configuration in which both the transmission of the transmission wave and the reception of the reception wave are implemented by the single transceiver 210 including the single vibrator 211 is illustrated. However, the technique of the embodiment is also applicable to a configuration in which a configuration on a transmission side and a configuration on a reception side are separated, for example, a configuration in which a first vibrator for transmitting the transmission wave and a second vibrator for receiving the reception wave are separately installed. The controller 220 has a hardware configuration similar to that of a normal computer. More specifically, the controller 220 includes an input and output device 221, a storage device 222, and a processor 223. The input and output device 221 is an interface for implementing the transmission and reception of information between the controller 220 and an outside (the ECU 100 and the transceiver 210 in the example illustrated in FIG. 1). The storage device 222 includes a main storage device such as a ROM or a RAM, and/or an auxiliary storage device such as an HDD or an SSD. The processor 223 manages various processing executed by the controller 220. The processor 223 includes an arithmetic unit, for example, a CPU, and the like. The processor 223 reads and executes a computer program stored in the storage device 222, thereby implementing various functions. Here, the object detection device 200 according to the embodiment detects a distance to the object by a technique referred to as a time of flight (TOF) method. As described in detail below, the TOF method is a technique of calculating a distance to the object in consideration of a difference between a timing at which the transmission wave is transmitted (more specifically, the transmission is started) and a timing at which the reception wave is received (more specifically, the reception is started). FIG. 3 is an exemplary and schematic diagram illustrating an outline of a technique used by the object detection device 200 according to the embodiment to detect a distance to the object. More specifically, FIG. 3 is an exemplary and schematic diagram represented in a graph form that illustrates a temporal change in a signal level (for example, amplitude) of the ultrasonic wave transmitted and received by the object detection device 200 according to the embodiment. 
In the graph illustrated in FIG. 3, a horizontal axis corresponds to a time, and a vertical axis corresponds to a signal level of a signal transmitted and received by the object detection device 200 via the transceiver 210 (the vibrator 211). In the graph illustrated in FIG. 3, a solid line L11 represents an example of an envelope curve representing a temporal change in the signal level of the signal transmitted and received by the object detection device 200, that is, a vibration degree of the vibrator 211. It can be seen from the solid line L11 that by driving and vibrating the vibrator 211 for a time Ta from a timing t0, the transmission of the transmission wave is completed at a timing t1, and then during a time Tb until a timing t2, the vibration of the vibrator 211 due to inertia continues while attenuating. Therefore, in the graph illustrated in FIG. 3, the time Tb corresponds to a so-called reverberation time. The solid line L11 reaches a peak at which the vibration degree of the vibrator 211 exceeds (or is equal to or more than) a predetermined threshold Th1 represented by a one-dot chain line L21 at a timing t4 at which a time Tp elapses from the timing t0 at which the transmission of the transmission wave is started. The threshold Th1 is a value set in advance for identifying whether the vibration of the vibrator 211 is caused by a reception of a reception wave serving as a transmission wave reflected by a detection target object (for example, the obstacle O illustrated in FIG. 2) and returned, or is caused by a reception of a reception wave serving as a transmission wave reflected by a non-detection target object (for example, the road surface RS illustrated in FIG. 2) and returned. FIG. 3 illustrates an example in which the threshold Th1 is set as a constant value that does not change as the time elapses, but in the embodiment, the threshold Th1 may be set as a value that changes as the time elapses. Here, the vibration having a peak exceeding (or equal to or more than) the threshold Th1 can be considered to be caused by the reception of the reception wave serving as the transmission wave reflected by the detection target object and returned. On the other hand, the vibration having a peak equal to or lower than (or less than) the threshold Th1 can be considered to be caused by the reception of the reception wave serving as the transmission wave reflected by the non-detection target object and returned. Therefore, based on the solid line L11, it can be seen that the vibration of the vibrator 211 at the timing t4 is caused by the reception of the reception wave serving as the transmission wave reflected by the detection target object and returned. Further, in the solid line L11, the vibration of the vibrator 211 is attenuated after the timing t4. Therefore, the timing t4 corresponds to a timing at which the reception of the reception wave serving as the transmission wave reflected by the detection target object and returned is completed, in other words, a timing at which the transmission wave last transmitted at the timing t1 returns as the reception wave. Further, in the solid line L11, a timing t3 as a start point of the peak at the timing t4 corresponds to a timing at which the reception of the reception wave serving as the transmission wave reflected by the detection target object and returned is started, in other words, a timing at which the transmission wave first transmitted at the timing t0 returns as the reception wave. 
Therefore, in the solid line L11, a time ΔT between the timing t3 and the timing t4 is equal to the time Ta as a transmission time of the transmission wave. Based on the above, in order to obtain a distance to the detection target object by the TOF method, it is necessary to obtain a time Tf between the timing t0 at which the transmission wave starts to be transmitted and the timing t3 at which the reception wave starts to be received. The time Tf can be obtained by subtracting the time ΔT, which is equal to the time Ta as the transmission time of the transmission wave, from the time Tp as a difference between the timing t0 and the timing t4 at which the signal level of the reception wave reaches the peak exceeding the threshold Th1. The timing t0 at which the transmission wave starts to be transmitted can be easily specified as a timing at which the object detection device 200 starts to operate, and the time Ta as the transmission time of the transmission wave is determined in advance by setting or the like. Therefore, in order to obtain the distance to the detection target object by the TOF method, it is important to specify the timing t4 at which the signal level of the reception wave reaches the peak exceeding the threshold Th1. In order to specify the timing t4, it is important to accurately detect a correspondence between the transmission wave and the reception wave serving as the transmission wave reflected by the detection target object and returned. Further, as described above, in general, the transmission and reception waves are absorbed and attenuated according to the temperature and humidity of the medium through which they propagate. Therefore, for example, in the related art, there is a technique that uses temperature data inside and outside a vehicle interior from a temperature sensor and humidity data inside the vehicle interior from a humidity sensor to estimate the humidity outside the vehicle interior and correct the absorption and the attenuation of the transmission and reception waves. According to this technique, the correction can be performed even if there is no humidity sensor for detecting the humidity outside the vehicle interior. However, in the related art described above, since the humidity outside the vehicle interior is estimated based on the humidity inside the vehicle interior to correct the absorption and the attenuation, the accuracy may be low. Further, it would be meaningful if the absorption and attenuation value could be estimated with high accuracy even in a case in which at least one of the temperature and the humidity of the target environment is not known when performing the correction and setting a threshold for a CFAR signal. Therefore, in the embodiment, the object detection device 200 is configured as described below, and thus the absorption and attenuation value can be estimated with high accuracy in a case of performing CFAR processing. Hereinafter, the object detection device 200 will be described in detail. FIG. 4 is an exemplary and schematic block diagram illustrating a detailed configuration of the object detection device 200 according to the embodiment. In FIG. 4, the configuration on the transmission side and the configuration on the reception side are illustrated in a separated state, but such an aspect illustrated in the drawing is merely for convenience of description. 
Therefore, in the embodiment, as described above, both the transmission of the transmission wave and the reception of the reception wave are implemented by the single transceiver 210. However, as described above, the technique of the embodiment is also applicable to the configuration in which the configuration on the transmission side and the configuration on the reception side are separated from each other. As illustrated in FIG. 4, the object detection device 200 includes a transmitter 411 as a configuration on the transmission side. Further, the object detection device 200 includes, as a configuration on the reception side, a receiver 421, a preprocessor 422, a CFAR processor 423, a threshold processor 424, a detection processor 425, and an estimator 426. Further, in the embodiment, at least a part of the configurations illustrated in FIG. 4 may be implemented as a result of a cooperation between hardware and software, more specifically, as a result of the processor 223 of the object detection device 200 reading the computer program from the storage device 222 and executing the computer program. However, in the embodiment, at least a part of the configurations illustrated in FIG. 4 may be implemented by dedicated hardware (circuitry). Further, in the embodiment, each configuration illustrated in FIG. 4 may operate under a control of the controller 220 of the object detection device 200 itself, or may operate under a control of the external ECU 100. First, the configuration on the transmission side will be described. The transmitter 411 transmits a transmission wave to an outside including a road surface by vibrating the vibrator 211 described above at a predetermined transmission interval. The transmission interval is a time interval from the transmission of the transmission wave to a next transmission of the transmission wave. The transmitter 411 is configured by using, for example, a circuit that generates a carrier wave, a circuit that generates a pulse signal corresponding to identification information to be given to the carrier wave, a multiplier that modulates the carrier wave according to the pulse signal, an amplifier that amplifies a transmission signal output from the multiplier, and the like. Next, the configuration on the reception side will be described. The receiver 421 receives a reflected wave, obtained when the transmission wave transmitted from the transmitter 411 is reflected by the object, as the reception wave, until a predetermined measurement time elapses after the transmission wave is transmitted. The measurement time is a standby time set for receiving the reception wave serving as the reflected wave of the transmission wave after the transmission wave is transmitted. The preprocessor 422 performs preprocessing for converting a reception signal corresponding to the reception wave received by the receiver 421 into a processing target signal to be input to the CFAR processor 423. The preprocessing includes, for example, amplification processing of amplifying a reception signal corresponding to a reception wave, filter processing of reducing noise included in the amplified reception signal, correlation processing of acquiring a correlation value indicating a similarity degree between the transmission signal and the reception signal, envelope curve processing of generating a signal based on an envelope curve of a waveform indicating a temporal change of the correlation value as a processing target signal, and the like. 
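The preprocessing chain lends itself to a compact sketch. The snippet below applies matched-filter correlation followed by Hilbert-transform envelope extraction; the pulse shape, sampling rate, and use of `scipy.signal` are illustrative assumptions, since the embodiment does not fix a particular filter or envelope method.

```python
import numpy as np
from scipy.signal import hilbert, fftconvolve

FS = 200_000          # Hz, assumed sampling rate
F0 = 48_000           # Hz, assumed ultrasonic carrier

# Assumed transmit pulse: a short windowed burst of the carrier.
t_pulse = np.arange(int(0.5e-3 * FS)) / FS
pulse = np.hanning(t_pulse.size) * np.sin(2 * np.pi * F0 * t_pulse)

def preprocess(rx: np.ndarray) -> np.ndarray:
    """Correlation processing (matched filter against the transmit pulse)
    followed by envelope curve processing, yielding the processing target
    signal fed to the CFAR processor."""
    corr = fftconvolve(rx, pulse[::-1], mode="same")  # similarity to the transmit signal
    return np.abs(hilbert(corr))                      # envelope of the correlation

# Toy received signal: a delayed echo plus noise.
rng = np.random.default_rng(1)
rx = rng.normal(0, 0.1, 4000)
echo_start = 2500
rx[echo_start:echo_start + pulse.size] += 0.8 * pulse
target_signal = preprocess(rx)
print(int(np.argmax(target_signal)))  # near the echo position
```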
The CFAR processor 423 acquires the CFAR signal by performing the CFAR processing on the processing target signal output from the preprocessor 422. As described above, the CFAR processing is processing of acquiring a CFAR signal corresponding to a signal obtained by removing the clutter from the processing target signal by using a moving average of a value (signal level) of the processing target signal. For example, the CFAR processor 423 according to the embodiment acquires a CFAR signal with a configuration as illustrated in FIG. 5. FIG. 5 is an exemplary and schematic diagram illustrating an example of CFAR processing that may be executed in the embodiment. As illustrated in FIG. 5, in the CFAR processing, first, a processing target signal 510 is sampled at a predetermined time interval. Then, an arithmetic unit 511 of the CFAR processor 423 calculates a sum of values of the processing target signals for N samples corresponding to the reception waves received in a section T51 that is present before a detection timing t50. Further, an arithmetic unit 512 of the CFAR processor 423 calculates a sum of values of the processing target signals for N samples corresponding to the reception waves received in a section T52 that is present after the detection timing t50. Then, an arithmetic unit 520 of the CFAR processor 423 adds the calculation results of the arithmetic units 511 and 512. Then, an arithmetic unit 530 of the CFAR processor 423 divides the calculation result of the arithmetic unit 520 by 2N, which is a sum of the number N of samples of the processing target signals in the section T51 and the number N of samples of the processing target signals in the section T52, and calculates an average value of the values of the processing target signals in both the sections T51 and T52. Then, an arithmetic unit 540 of the CFAR processor 423 subtracts the average value as the calculation result of the arithmetic unit 530 from the value of the processing target signal at the detection timing t50 and acquires a CFAR signal 550. As described above, the CFAR processor 423 according to the embodiment samples the processing target signal based on the reception wave, and acquires a CFAR signal based on a difference between a value of a first processing target signal for (at least) one sample based on the reception wave received at a predetermined detection timing and an average value of values of second processing target signals for a plurality of samples based on the reception waves received in the predetermined sections T51 and T52 before and after the detection timing. In the above description, as an example of the CFAR processing, the processing of acquiring the CFAR signal based on the difference between the value of the first processing target signal and the average value of the values of the second processing target signals is illustrated. However, the CFAR processing according to the embodiment may be processing of acquiring a CFAR signal based on a ratio between the value of the first processing target signal and the average value of the values of the second processing target signals, or may be processing of acquiring a CFAR signal based on normalization of the difference between the value of the first processing target signal and the average value of the values of the second processing target signals. Here, as described above, the CFAR signal corresponds to a signal obtained by removing the clutter from a processing target signal. 
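The arithmetic chain of FIG. 5 is a cell-averaging scheme and can be sketched directly; the window length N and the toy input below are assumptions made for illustration.

```python
import numpy as np

def cfar_difference(signal: np.ndarray, n: int) -> np.ndarray:
    """Cell-averaging CFAR as described for FIG. 5: for each detection
    timing, average N samples before and N samples after it (2N total),
    then subtract that moving average from the sample under test."""
    out = np.zeros_like(signal, dtype=float)
    for t in range(n, signal.size - n):
        before = signal[t - n:t]          # section T51
        after = signal[t + 1:t + 1 + n]   # section T52
        avg = (before.sum() + after.sum()) / (2 * n)
        out[t] = signal[t] - avg          # difference variant; a ratio variant is also possible
    return out

# Toy processing target signal: slowly varying clutter plus one echo peak.
x = np.linspace(0, 1, 500)
signal = 0.5 * np.exp(-3 * x)
signal[300] += 1.0
cfar = cfar_difference(signal, n=16)
print(int(np.argmax(cfar)))  # 300: the echo survives, the slow clutter is suppressed
```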
More specifically, as noted above, the CFAR signal corresponds to a signal obtained by removing various kinds of noise, including the clutter and stationary noise generated stationarily in the transceiver 210, from the processing target signal. The processing target signal and the noise (clutter and stationary noise) are illustrated as waveforms as illustrated in FIG. 6 below, for example. FIG. 6 is an exemplary and schematic diagram illustrating an example of the processing target signal and noise according to the embodiment. In the example illustrated in FIG. 6, a solid line L601 represents a temporal change of the value of the processing target signal, a one-dot chain line L602 represents a temporal change of the value of the clutter, and a two-dot chain line L603 represents a temporal change of the value of the stationary noise. As illustrated in FIG. 6, a magnitude relation between the values of the clutter (see the one-dot chain line L602) and the stationary noise (see the two-dot chain line L603) changes for each section. For example, in the example illustrated in FIG. 6, the value of the stationary noise is larger than the value of the clutter in a section T61, the value of the clutter is larger than the value of the stationary noise in a section T62 next to the section T61, and the value of the stationary noise is larger than the value of the clutter in a section T63 next to the section T62. Further, a timing at which the value of the clutter increases is substantially determined in advance in accordance with an installation position and an installation attitude of the transceiver 210. The value of the stationary noise is also substantially determined in advance. Therefore, the sections T61 to T63 are substantially determined in advance in accordance with the installation position and the installation attitude of the transceiver 210. Further, a start point of the section T61 coincides with a start point of the measurement time described above as the standby time of the receiver 421 set for receiving the reception wave serving as the reflected wave of the transmission wave, and an end point of the section T63 coincides with an end point of the measurement time described above. Here, in the sections T61 and T63, since the clutter is negligibly small with respect to the processing target signal, the CFAR signal corresponding to the signal obtained by removing the clutter and the stationary noise from the processing target signal is a signal including the influences of the absorption and the attenuation of the processing target signal without including the influences of the absorption and the attenuation of the clutter. On the other hand, in the section T62, since the clutter is too large to be negligible with respect to the processing target signal, the CFAR signal is a signal that does not include the influences of the absorption and the attenuation, because the influences of the absorption and the attenuation of the clutter and the influences of the absorption and the attenuation of the processing target signal cancel each other out. Then, the detection processor 425 specifies a detection timing at which the value of the CFAR signal exceeds the threshold based on a comparison between the value of the CFAR signal and the threshold set by the threshold processor 424. 
Since the detection timing at which the value of the CFAR signal exceeds the threshold coincides with a timing at which the signal level of the reception wave serving as the transmission wave reflected by the object and returned reaches a peak, when the detection timing at which the value of the CFAR signal exceeds the threshold is specified, the distance to the object can be detected by the TOF method described above. Returning to FIG. 4, the estimator 426 estimates the absorption and attenuation value corresponding to the average value based on a road surface reflection estimation expression that defines a relation between the average value and the absorption and attenuation value in advance. A road surface reflection estimation expression A is expressed by, for example, the following Equation (1). A = f(α, β) (1) Here, α is a parameter indicating the absorption and attenuation value. β is a parameter indicating a magnification in a direction of the amplitude (signal level). f(α, β) is a functional expression having the two parameters α and β, which is created in advance based on experimental results and the like, and includes, for example, an exponential function and a logarithmic function. Further, the estimator 426 may correct the absorption and attenuation value based on a distance characteristic expression that defines the relation between the absorption and attenuation value and the distance in advance. The distance characteristic expression is created in advance based on experimental results and the like. Further, for example, in the example of FIG. 6, it is preferable that the processing performed by the estimator 426 be performed on the section T62, in which the clutter is too large to be negligible with respect to the processing target signal, among the sections T61 to T63, but the processing is not limited thereto. FIG. 7 is an exemplary and schematic graph illustrating a relation between an amplitude and a distance related to the road surface reflection estimation expression according to the embodiment. In the graph of FIG. 7, a vertical axis represents an amplitude (signal level), and a horizontal axis represents a distance. A line L1 corresponds to a case where α (the absorption and attenuation value) = 1. A line L2 corresponds to a case where α = 1.2. A line L3 corresponds to a case where α = 1.4. A line L4 corresponds to a case where α = 1.6. Then, assuming that the result of performing the processing based on the road surface reflection estimation expression by the estimator 426 is a line L11, α = 1.1 can be determined (estimated). The relation between the absorption and attenuation value, the temperature, and the humidity is known. Therefore, when two of these three values are known, the remaining one can be calculated. That is, when the absorption and attenuation value is determined, for example, the processor 223 (calculator) can calculate the humidity of the target environment based on the absorption and attenuation value and the temperature data detected by the temperature sensor that detects the temperature of the target environment. Similarly, the temperature can be calculated based on the absorption and attenuation value and the humidity. The threshold processor 424 sets the threshold for the CFAR signal by using the absorption and attenuation value estimated by the estimator 426. FIG. 8 is an exemplary and schematic flowchart illustrating a series of processing executed by the object detection system according to the embodiment. 
As illustrated in FIG. 8, in the embodiment, first, in S801, the transmitter 411 of the object detection device 200 transmits a transmission wave. Then, in S802, the receiver 421 of the object detection device 200 receives a reception wave corresponding to the transmission wave transmitted in S801. Then, in S803, the preprocessor 422 of the object detection device 200 executes preprocessing, for the next processing in S804, on a reception signal corresponding to the reception wave received in S802. Then, in S804, the CFAR processor 423 of the object detection device 200 executes CFAR processing on the processing target signal output from the preprocessor 422 through the preprocessing in S803, and generates a CFAR signal. Next, in S805, the estimator 426 estimates an absorption and attenuation value corresponding to the average value based on the road surface reflection estimation expression. Further, the estimator 426 may further correct the absorption and attenuation value based on the distance characteristic expression. Next, in S806, the threshold processor 424 of the object detection device 200 sets a threshold for the CFAR signal generated in S804 by using the absorption and attenuation value estimated in S805. Then, in S807, the detection processor 425 of the object detection device 200 detects a distance to an object based on the comparison between a value of the CFAR signal and the threshold set in S806. Then, the processing ends. As described above, according to the object detection device 200 of the embodiment, in a case of performing the CFAR processing, the absorption and attenuation value corresponding to the average value can be estimated with high accuracy based on the road surface reflection estimation expression. Further, the absorption and attenuation value can be corrected to a more accurate value based on the distance characteristic expression. Further, when temperature data exists, the humidity can be calculated with high accuracy based on the temperature data and the absorption and attenuation value. Therefore, it is not necessary to use a worst value as the absorption and attenuation value as in the related art, and the humidity estimation, the threshold setting of the CFAR signal, the distance measurement, and the like can be performed with high accuracy by using the absorption and attenuation value estimated with high accuracy. Further, the absorption and attenuation value can also be applied to correct the absorption and attenuation at a time of identifying steps on the road surface. <Modification> In the embodiment described above, the technique disclosed here is applied to a configuration in which a distance to an object is detected by a transmission and reception of an ultrasonic wave. However, the technique disclosed here can also be applied to a configuration in which a distance to an object is detected by a transmission and reception of a wave other than the ultrasonic wave, such as a sound wave, a millimeter wave, a radar wave, or an electromagnetic wave. 
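The S801-S807 flow, together with Equation (1), can be illustrated end to end with a toy sketch. Since the actual f(α, β) and the distance characteristic expression are calibrated experimentally and are not given, the sketch assumes a simple exponential road-surface reflection model A(d) = β·exp(−α·d) and a least-squares scan for α; the threshold rule and the TOF-to-distance conversion (distance = sound speed × TOF / 2) are likewise illustrative.

```python
import numpy as np

C = 343.0  # m/s, approximate speed of sound in air (assumed)

def fit_alpha(dist, amp, alphas=np.linspace(0.5, 2.0, 151)):
    """S805 (assumed model): fit the absorption/attenuation value alpha in
    A(d) = beta * exp(-alpha * d) to road-surface echo amplitudes by
    scanning alpha and solving beta in closed form per candidate."""
    best_alpha, best_err = None, np.inf
    for a in alphas:
        basis = np.exp(-a * dist)
        beta = (amp @ basis) / (basis @ basis)   # least-squares scale factor
        err = np.sum((amp - beta * basis) ** 2)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha

# Toy road-surface amplitudes generated with alpha = 1.1 (cf. FIG. 7).
dist = np.linspace(0.2, 2.0, 30)
amp = 0.9 * np.exp(-1.1 * dist)
alpha = fit_alpha(dist, amp)

# S806 (assumed rule): relax the CFAR threshold as attenuation grows.
threshold = 0.2 * np.exp(-alpha * dist)

# S807: first CFAR sample exceeding the threshold gives the TOF, hence distance.
cfar = 0.9 * np.exp(-alpha * dist) * (dist > 1.0)   # toy CFAR signal with an echo beyond 1 m
hits = np.nonzero(cfar > threshold)[0]
if hits.size:
    tof = 2 * dist[hits[0]] / C        # round-trip time for that range bin
    print(f"alpha={alpha:.2f}, detected at {C * tof / 2:.2f} m")
```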
An object detection device according to an aspect of this disclosure includes: a transmitter configured to transmit a transmission wave to an outside including a road surface; a receiver configured to receive a reflected wave of the transmission wave being reflected by an object as a reception wave; a CFAR processor configured to acquire a CFAR signal at a predetermined detection timing by CFAR processing based on a value of a first processing target signal based on a reception wave received at the detection timing and an average value of values of second processing target signals based on the reception waves received in predetermined sections before and after the detection timing; and an estimator configured to estimate an absorption and attenuation value corresponding to the average value based on a road surface reflection estimation expression that defines a relation between the average value and the absorption and attenuation value in advance. With such a configuration, in a case of performing the CFAR processing, the absorption and attenuation value corresponding to the average value can be estimated with high accuracy based on the road surface reflection estimation expression. Further, in the object detection device described above, the estimator may further correct the absorption and attenuation value based on a distance characteristic expression that defines a relation between the absorption and attenuation value and a distance in advance. With such a configuration, the absorption and attenuation value can be corrected to a more accurate value based on the distance characteristic expression. The object detection device may further include a calculator configured to calculate humidity of a target environment based on the absorption and attenuation value and temperature data detected by a temperature sensor configured to detect a temperature of the target environment. With such a configuration, when temperature data exists, the humidity can be calculated with high accuracy based on the temperature data and the absorption and attenuation value. While embodiments and modifications disclosed here have been described, these embodiments and modifications have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, these embodiments and modifications described herein may be embodied in a variety of forms; furthermore, various omissions, substitutions and changes in the form of these embodiments and modifications described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. The principles, preferred embodiment and mode of operation of the present invention have been described in the foregoing specification. However, the invention which is intended to be protected is not to be construed as limited to the particular embodiments disclosed. Further, the embodiments described herein are to be regarded as illustrative rather than restrictive. Variations and changes may be made by others, and equivalents employed, without departing from the spirit of the present invention. Accordingly, it is expressly intended that all such variations, changes and equivalents which fall within the spirit and scope of the present invention as defined in the claims, be embraced thereby.
33,998
11860275
DETAILED DESCRIPTION Hereinafter, embodiments disclosed here will be described with reference to the drawings. Configurations of the embodiments described below and operations and effects provided by the configurations are merely examples, and this disclosure is not limited to the following description. First Embodiment FIG. 1 is a top view showing an example of a configuration of a vehicle 1 according to a first embodiment. The vehicle 1 is an example of a movable body on which an object detection device according to the present embodiment is mounted. The object detection device according to the present embodiment is a device that detects an object (other vehicles, a structure, a pedestrian, or the like) present around the vehicle 1 based on TOF, Doppler shift information or the like acquired by transmitting ultrasonic waves from the vehicle 1 and receiving reflected waves from the object. The object detection device according to the present embodiment includes a plurality of transmission and reception units 21A to 21H (hereinafter referred to as a transmission and reception unit 21 when it is not required to distinguish the plurality of transmission and reception units 21A to 21H). All of the transmission and reception units 21 are provided on a vehicle body 2 serving as an exterior of the vehicle 1, transmit ultrasonic waves (transmission waves) to the outside of the vehicle body 2, and receive reflected waves from an object present outside the vehicle body 2. In the example shown in FIG. 1, four transmission and reception units 21A to 21D are disposed at a front end portion of the vehicle body 2, and four transmission and reception units 21E to 21H are disposed at a rear end portion. The number of transmission and reception units 21 and positions where the transmission and reception units 21 are provided are not limited to the above example. FIG. 2 is a block diagram showing an example of a configuration of a vehicle control device 50 according to the first embodiment. The vehicle control device 50 (an example of a movable body control device) performs a process for controlling the vehicle 1 based on information output from an object detection device 200. The vehicle control device 50 according to the present embodiment includes an ECU 100 and the object detection device 200. The object detection device 200 includes the plurality of transmission and reception units 21 and a control unit 220. Each of the transmission and reception units 21 includes a vibrator 211 configured using a piezoelectric element, an amplifier, and the like, and achieves transmission and reception of the ultrasonic waves by vibration of the vibrator 211. Specifically, the transmission and reception unit 21 transmits, as the transmission waves, ultrasonic waves generated in response to the vibration of the vibrator 211, and detects vibration of the vibrator 211 caused by reflected waves that are generated by an object O reflecting the transmission waves. The vibration of the vibrator 211 is converted into an electric signal, and it is possible to acquire, based on the electric signal, a TOF corresponding to a distance from the transmission and reception unit 21 to the object O, Doppler shift information corresponding to a relative speed of the object O, and the like. The transmission and reception unit 21 according to the embodiment transmits transmission waves including ultrasonic waves having directivity in a direction parallel or substantially parallel to a traveling direction of the vehicle 1. 
The transmission waves include ultrasonic waves (a non-directional component) traveling downward in a vertical direction from the transmission and reception unit 21. The transmission waves will be described below. In the example shown in FIG. 2, a configuration in which both the transmission of the transmission waves and the reception of the reflected waves are performed by a single vibrator 211 is shown, but the configuration of the transmission and reception unit 21 is not limited thereto. For example, the configuration may be a configuration in which a transmission side and a reception side are separated, such as a configuration in which a vibrator for transmitting the transmission waves and a vibrator for receiving the reflected waves are separately provided. The control unit 220 includes an input and output device 221, a storage device 222, and a processor 223. The input and output device 221 is an interface device for implementing transmission and reception of information between the control unit 220 and an external mechanism (the transmission and reception unit 21, the ECU 100, or the like). The storage device 222 includes a main memory device such as a read only memory (ROM) and a random access memory (RAM), and an auxiliary storage device such as a hard disk drive (HDD) and a solid state drive (SSD). The processor 223 is an integrated circuit that executes various processes for achieving a function of the control unit 220, and includes, for example, a central processing unit (CPU) that operates according to a program, an application specific integrated circuit (ASIC) designed for a specific application, and the like. The processor 223 executes various arithmetic processes and control processes by reading and executing programs stored in the storage device 222. The ECU 100 is a unit that performs various processes for controlling the vehicle 1 based on various pieces of information acquired from the object detection device 200 or the like. The ECU 100 includes an input and output device 110, a storage device 120, and a processor 130. The input and output device 110 is an interface device for implementing transmission and reception of information between the ECU 100 and an external mechanism (the object detection device 200, a drive mechanism, a brake mechanism, a steering mechanism, a transmission mechanism, an in-vehicle display, a speaker, or the like). The storage device 120 includes a main memory device such as a ROM and a RAM, and an auxiliary storage device such as an HDD and an SSD. The processor 130 is an integrated circuit that executes various processes for achieving a function of the ECU 100, and includes, for example, a CPU and an ASIC. The processor 130 executes various arithmetic processes and control processes by reading and executing programs stored in the storage device 120. FIG. 3 is a block diagram showing an example of a function configuration of the object detection device 200 according to the first embodiment. The object detection device 200 according to the present embodiment includes a signal processing unit 302, an object detection unit 303, an abnormality determination unit 304 (a determination unit), a reference information holding unit 305, and an output unit 306. These functional components 302 to 306 are implemented by cooperation of hardware components of the object detection device 200 shown in FIG. 2 and software components such as firmware and programs. The signal processing unit 302 processes signals acquired by the transmission and reception unit 21 and generates various kinds of data. 
The signal processing unit 302 performs, for example, an amplification process, a filter process, and an envelope curve process on the electric signal corresponding to the vibration of the vibrator 211, and generates envelope curve data or the like indicating a change over time in an intensity (amplitude) of the ultrasonic waves transmitted and received by the transmission and reception unit 21. A TOF corresponding to the object present around the vehicle 1 can be detected, and a distance from the vehicle 1 (the transmission and reception unit 21) to the object can be calculated, based on the envelope curve data. The object detection unit 303 detects the object (for example, other vehicles, a structure, a pedestrian, or the like) present around the vehicle 1 based on the data generated by the signal processing unit 302, and generates object information regarding the object. The object information may include, for example, the distance from the vehicle 1 (the transmission and reception unit 21) to the object, a relative velocity of the object, and a type of the object. The abnormality determination unit 304 determines presence or absence of an abnormality based on a predetermined reference distance and a downward distance between the transmission and reception unit 21 and an object present below the transmission and reception unit 21 in a vertical direction; the downward distance is calculated based on reflected waves of the ultrasonic waves of the transmission waves traveling downward in the vertical direction from the transmission and reception unit 21. The ultrasonic waves traveling downward in the vertical direction from the transmission and reception unit 21 correspond to the non-directional component of the transmission waves, that is, the component traveling in a direction other than the direction corresponding to the directivity (the direction parallel or substantially parallel to the traveling direction of the vehicle 1). An abnormality in this case may be intrusion of an object (for example, a child or an animal) between the transmission and reception unit 21 and the road surface. In addition, the abnormality determination unit 304 determines that there is an abnormality when the intensity of reflected waves corresponding to the reference distance or shorter does not reach the predetermined reference intensity. An abnormality in this case may be a malfunction (for example, a state in which the ultrasonic waves cannot be appropriately transmitted and received due to adhesion of dirt or the like) of the transmission and reception unit 21, or the like. In addition, the abnormality determination unit 304 determines the presence or absence of abnormality before the stopped (parked) vehicle 1 starts moving. The reference information holding unit 305 holds the reference distance and the reference intensity used for the abnormality determination performed by the abnormality determination unit 304. The reference information holding unit 305 holds, as the reference distance, a distance corresponding to the height of the transmission and reception unit 21 from the road surface. In addition, the reference information holding unit 305 may set the downward distance calculated when the vehicle 1 is stopped (parked) as the reference distance. In addition, the reference information holding unit 305 holds an intensity of reflected waves corresponding to the reference distance as the reference intensity. 
The reference distance and the reference intensity are stored in a storage device, and are read when abnormality determination is to be performed by the abnormality determination unit 304 (for example, when the vehicle 1 transitions from a stopped state to a moving-start state). The output unit 306 outputs the object information regarding the object detected by the object detection unit 303, the abnormality information regarding the abnormality determined by the abnormality determination unit 304, and the like. The abnormality information and the like are output to, for example, the ECU 100, and are used for control of the vehicle 1 (for example, a warning to an occupant, a traveling restriction process, and the like). FIG. 4 is a diagram showing an envelope curve illustrating an overview of an object detection method using the TOF in the first embodiment. FIG. 4 illustrates an envelope curve showing the change over time in the intensity of the ultrasonic waves transmitted and received by the transmission and reception unit 21. In the graph shown in FIG. 4, a horizontal axis corresponds to a time (the TOF), and a vertical axis corresponds to the intensity of ultrasonic waves transmitted and received by the transmission and reception unit 21. A solid line L11 represents an example of the envelope curve indicating the change over time in the intensity indicating magnitude of the vibration of the vibrator 211. It can be seen from the solid line L11 that the vibrator 211 is driven to vibrate only for a time Ta from a timing t0, transmission of the transmission waves is completed at a timing t1, and then the vibration of the vibrator 211 due to inertia continues while being attenuated during a time Tb from the timing t1 to a timing t2. Therefore, in the graph shown in FIG. 4, the time Tb corresponds to reverberation time. The solid line L11 reaches a peak, at which the magnitude of the vibration of the vibrator 211 exceeds (or is equal to or higher than) a predetermined threshold value represented by a one-dot chain line L21, at a timing t4 at which a time Tp elapses from the timing t0 at which the transmission of the transmission waves is started. The threshold value is a value which is preset to identify whether the vibration of the vibrator 211 is caused by the reception of reflected waves from the object to be detected, or caused by the reception of reflected waves from an object not to be detected. Here, although the threshold value represented by the one-dot chain line L21 is shown as a constant value, the threshold value may be a variable value that changes depending on an elapse of time, situations, or the like. Vibration having a peak exceeding (or equal to or higher than) the threshold value represented by the one-dot chain line L21 can be regarded as being caused by the reception of reflected waves from the object to be detected. On the other hand, vibration having a peak equal to or lower than (or less than) the threshold value can be regarded as being caused by the reception of reflected waves from the object not to be detected. Therefore, it can be seen from the solid line L11 that the vibration of the vibrator 211 at the timing t4 is caused by the reception of reflected waves from the object to be detected. In the solid line L11, the vibration of the vibrator 211 is attenuated after the timing t4. 
Therefore, the timing t4 corresponds to a timing at which the reception of reflected waves from the object to be detected is completed, in other words, a timing at which transmission waves last transmitted at the timing t1 are returned as the reflected waves. In addition, in the solid line L11, a timing t3 serving as a start point of the peak at the timing t4 corresponds to a timing at which the reception of reflected waves from the object to be detected starts, in other words, a timing at which transmission waves first transmitted at the timing t0 are returned as the reflected waves. Therefore, a time ΔT between the timing t3 and the timing t4 is equal to the time Ta serving as a transmission time of the transmission waves. Based on the above, in order to obtain the distance to the object by using the TOF, it is necessary to obtain a time Tf between the timing t0 at which the transmission waves start to be transmitted and the timing t3 at which the reflected waves start to be received. The time Tf can be obtained by subtracting the time ΔT, which is equal to the time Ta serving as the transmission time of the transmission waves, from the time Tp which is a difference between the timing t0 and the timing t4 at which the intensity of the reflected waves exceeds the threshold value and reaches the peak. The timing t0 at which the transmission waves start to be transmitted can be easily specified as a timing at which the object detection device 200 starts operating, and the time Ta serving as the transmission time of the transmission waves is predetermined by a setting or the like. Therefore, the distance to the object to be detected can be obtained by specifying the timing t4 at which the intensity of the reflected waves exceeds the threshold value and reaches the peak. FIG. 5 is a diagram showing an example of characteristics of the ultrasonic waves transmitted and received by the transmission and reception unit 21 according to the present embodiment. The transmission waves transmitted from the transmission and reception unit 21 according to the present embodiment include transmission waves Wt1 having directivity in the direction parallel or substantially parallel to the traveling direction of the vehicle 1 and transmission waves Wt2 traveling downward in the vertical direction from the transmission and reception unit 21. The transmission waves Wt2 correspond to the above non-directional component. The direction parallel or substantially parallel to the traveling direction of the vehicle 1 includes a forward direction, a backward direction, a vehicle width direction, and the like. The transmission waves transmitted from the transmission and reception unit 21 may be ultrasonic waves that include the transmission waves Wt1 as a main lobe and include the transmission waves Wt2 as a side lobe. The transmission and reception unit 21 receives reflected waves Wr1 generated by an object (for example, other vehicles, a structure, a pedestrian, or the like) present in the direction parallel or substantially parallel to the traveling direction of the vehicle 1 reflecting the transmission waves Wt1. In addition, the transmission and reception unit 21 receives reflected waves Wr2 generated by an object (for example, a road surface G, or an object entering between the transmission and reception unit 21 and the road surface G) present below the transmission and reception unit 21 in the vertical direction reflecting the transmission waves Wt2. 
When no object is present between the transmission and reception unit 21 and the road surface G, a TOF corresponding to a distance D between the transmission and reception unit 21 and the road surface G is detected. The distance D is an example of the above reference distance.

FIG. 6 is a graph showing an example of the envelope curve detected in a normal state in the first embodiment. In FIG. 6, the horizontal axis corresponds to the elapsed time from when the transmission waves Wt1 and Wt2 are transmitted, and the vertical axis corresponds to the intensity of the ultrasonic waves transmitted and received by the transmission and reception unit 21. In addition, FIG. 6 shows a threshold value A1 and a threshold value A2. The threshold value A1 is a threshold value set for removing noise caused by a structure or the like of the transmission and reception unit 21. The threshold value A2 is a threshold value for detecting a peak corresponding to the reference distance (the distance D in this embodiment) caused by the reflected waves Wr2 of the transmission waves Wt2 traveling downward in the vertical direction from the transmission and reception unit 21. The threshold value A2 is preferably lower than a threshold value (for example, the threshold value indicated by the one-dot chain line L21 in FIG. 4) for detecting a normal object to be detected (another vehicle, a structure, a pedestrian, and the like). As shown in FIG. 6, in the normal state (in a case where there is no abnormality such as intrusion of an object below the vehicle 1, a malfunction of the transmission and reception unit 21, or the like), a peak is detected at a timing ts (TOF: ts − t0) corresponding to the distance D. The timing ts (TOF: ts − t0) may be stored in the storage device in advance as a known value, or may be measured at a predetermined timing (for example, a timing when parking of the vehicle 1 is completed or before the vehicle 1 starts moving). That is, when a peak having an intensity exceeding the threshold value A2 is detected at the timing ts corresponding to the distance D serving as the reference distance, a normal state can be determined.

FIG. 7 is a graph showing an example of the envelope curve detected when an object enters between the transmission and reception unit 21 and the road surface G in the first embodiment. As shown in FIG. 7, when an object is present between the transmission and reception unit 21 and the road surface G, a peak is detected at a timing tu before the timing ts corresponding to the distance D. At this time, a time difference Δt = ts − tu corresponds to the height of the object from the road surface G. Thus, when a peak having an intensity exceeding the threshold value A2 is detected at a timing tu before the timing ts corresponding to the distance D serving as the reference distance, it can be determined that there is an abnormality (intrusion of an object below the vehicle 1).

FIG. 8 is a graph showing an example of the envelope curve detected when a malfunction occurs in the transmission and reception unit 21 in the first embodiment. As shown in FIG. 8, when a malfunction (adhesion of dirt or the like) occurs in the transmission and reception unit 21, a peak of the intensity reaching the threshold value A2 (an example of the reference intensity) is not detected in the period of time (t0 to ts) before the timing ts corresponding to the distance D.
That is, when a peak having an intensity reaching the threshold value A2 is not detected in the period of time t0 to ts, corresponding to not more than the distance D serving as the reference distance, it can be determined that there is an abnormality (a state in which a malfunction occurs in the transmission and reception unit 21, or the like).

FIG. 9 is a flowchart showing an example of a process performed by the object detection device 200 according to the first embodiment. When an ignition power supply of the vehicle 1 is turned on (S101), the transmission and reception unit 21 performs transmission and reception of the ultrasonic waves (the transmission waves Wt1 and Wt2, and the reflected waves Wr1 and Wr2) one or more times (S102), and the signal processing unit 302 measures, based on envelope curve data or the like obtained from a transmission and reception result of the ultrasonic waves, a downward distance based on the reflected waves Wr2 and an intensity of the reflected waves Wr2 (S103). The abnormality determination unit 304 determines, based on the measurement result, whether a peak exceeding the threshold value A2 is detected before the timing ts (S104). When a peak exceeding the threshold value A2 is detected before the timing ts (S104: Yes), the abnormality determination unit 304 determines that there is an abnormality such as intrusion of an object below the vehicle 1, and the output unit 306 outputs abnormality information indicating the abnormality to the ECU 100 or the like (S105). Next, the abnormality determination unit 304 determines, based on the measurement result, whether a peak having an intensity exceeding the threshold value A2 is detected in the period of time t0 to ts (S106). When no peak exceeding the threshold value A2 is detected before the timing ts (S104: No), step S106 is executed without executing step S105. When no peak having an intensity exceeding the threshold value A2 is detected in the period of time t0 to ts (S106: No), the abnormality determination unit 304 determines that there is an abnormality such as a malfunction of the transmission and reception unit 21 (for example, adhesion of dirt), and the output unit 306 outputs abnormality information indicating the abnormality to the ECU 100 or the like (S107). When a peak having an intensity exceeding the threshold value A2 is detected in the period of time t0 to ts (S106: Yes), the routine ends without executing step S107.

A case in which the reference distance is the distance D between the transmission and reception unit 21 and the road surface G is described above, but the reference distance is not limited thereto.

FIG. 10 is a flowchart showing an example of a method of setting the reference distance according to the first embodiment. Before parking of the vehicle 1 is completed, the transmission and reception unit 21 performs the transmission and reception of ultrasonic waves a plurality of times (S201). The signal processing unit 302 calculates, based on the transmission and reception results of the ultrasonic waves, an average downward distance, which is an average value of a plurality of downward distances, and an average intensity, which is an average value of the intensities of the reflected waves Wr2 corresponding to the plurality of downward distances, and the reference information holding unit 305 stores the average downward distance and the average intensity in the storage device (S202). When the parking is completed, the ignition power supply is turned off (S203).
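As an illustration, the determination flow of FIG. 9 (steps S104 to S107) can be sketched as follows; the peak representation and the names are assumptions, not the embodiment's actual interfaces.

def classify(peaks, a2, t0, ts):
    # peaks: list of (timing, intensity) pairs from the envelope curve
    # a2: threshold value A2; t0: transmission start; ts: timing for distance D
    abnormalities = []
    # S104/S105: a peak above A2 before ts suggests an object below the vehicle.
    if any(t0 <= t < ts and i > a2 for t, i in peaks):
        abnormalities.append("object intrusion below vehicle")
    # S106/S107: no peak reaching A2 up to ts suggests a malfunction of the
    # transmission and reception unit (for example, adhesion of dirt), since
    # the road-surface echo should appear at ts.
    if not any(t0 <= t <= ts and i >= a2 for t, i in peaks):
        abnormalities.append("transmission/reception malfunction")
    return abnormalities

# Normal case: only the road-surface echo at ts = 2.9 ms is seen.
print(classify([(2.9e-3, 0.8)], a2=0.5, t0=0.0, ts=2.9e-3))  # []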
Next, when the ignition power supply is turned on at the time when the vehicle 1 starts moving, the reference information holding unit 305 reads the average downward distance and the average intensity stored in the storage device (S204), sets the average downward distance as the reference distance (S205), and sets the threshold value A1 and the threshold value A2 based on the average intensity (S206). At this time, the threshold value A1 is set for removing low-intensity noise caused by the structure or the like of the transmission and reception unit 21. The threshold value A2 is set so as to enable detection of the reflected waves Wr2 from an object (an object present below the transmission and reception unit 21 in the vertical direction) corresponding to the average downward distance. The object corresponding to the average downward distance is the road surface G in many cases, but may be a curb, a parking block, or the like. An abnormality can be detected based on the state before parking by setting the reference distance and the reference intensity as described above. A case of using the average values obtained by performing the transmission and reception of the ultrasonic waves a plurality of times is described above, but a downward distance and an intensity obtained by transmitting and receiving the ultrasonic waves once may instead be set as the reference downward distance and the reference intensity.

The program for causing a computer (for example, the processor 223 of the control unit 220 and the processor 130 of the ECU 100) to execute the processes for achieving the various functions in the above embodiment can be provided by being recorded, as a file in an installable or executable format, on a computer-readable recording medium such as a CD (compact disc)-ROM, a flexible disc (FD), a CD-R (recordable), or a digital versatile disc (DVD). Further, the program may be provided or distributed via a network such as the Internet.

According to the above embodiment, by using the non-directional component (the transmission waves Wt2 and the reflected waves Wr2) included in the ultrasonic waves that are transmitted with directivity to detect an object present around the vehicle 1, it is possible to detect an abnormality such as intrusion of an object below the vehicle 1 or a malfunction of the transmission and reception unit 21. Accordingly, an abnormality can be detected without adding a separate sensor.

Hereinafter, another embodiment will be described with reference to a drawing. The same reference numerals are given to portions having the same or similar operations and effects as those of the first embodiment, and the description thereof may be omitted.

Second Embodiment

FIG. 11 is a block diagram showing an example of a functional configuration of an object detection device 500 according to a second embodiment. The object detection device 500 according to the present embodiment is different from the object detection device 200 according to the first embodiment in that the object detection device 500 includes a directivity change unit 511. The directivity change unit 511 according to the present embodiment changes the directivity of the transmission waves (one or both of the transmission waves Wt1 and the transmission waves Wt2 shown in FIG. 5) transmitted from the transmission and reception unit 21.
A method of changing the directivity of the transmission waves is not particularly limited; for example, a method of adjusting the electrical drive applied to a piezoelectric element constituting the vibrator 211 according to a desired directivity can be adopted. According to the above configuration, the accuracy of detection of an object and the accuracy of detection of an abnormality can be improved in various situations.

An object detection device as an example of this disclosure includes: a transmission and reception unit configured to transmit a transmission wave including an ultrasonic wave having directivity in a direction parallel or substantially parallel to a traveling direction of a movable body, and receive a reflected wave from an object; a determination unit configured to determine the presence or absence of an abnormality based on a predetermined reference distance and a downward distance between the transmission and reception unit and an object present below the transmission and reception unit in a vertical direction, the downward distance being calculated based on a reflected wave of an ultrasonic wave of the transmission wave traveling downward in the vertical direction from the transmission and reception unit; and an output unit configured to output information regarding the abnormality. According to the above configuration, the abnormality can be detected by using the ultrasonic wave (a non-directional component) traveling downward in the vertical direction from the transmission and reception unit.

The determination unit may be configured to determine the presence or absence of the abnormality before the stopped movable body starts moving. Accordingly, an abnormality (for example, intrusion of an object below a vehicle, a malfunction of the transmission and reception unit, or the like) generated while the movable body is stopped can be detected before the movable body starts moving.

The reference distance may be a distance corresponding to a height of the transmission and reception unit from a road surface. Accordingly, intrusion of an object between the transmission and reception unit and the road surface can be detected.

The reference distance may be the downward distance calculated when the movable body is stopped. Accordingly, the abnormality can be detected by comparing the states below the transmission and reception unit when the movable body is stopped and when it starts moving.

The determination unit may be configured to determine that there is an abnormality when the downward distance is shorter than the reference distance. Accordingly, intrusion of an object between the movable body and the road surface can be detected.

The determination unit may be configured to determine that there is an abnormality when an intensity of the reflected wave corresponding to the reference distance or shorter does not reach a predetermined reference intensity. Accordingly, a malfunction (for example, a state in which the ultrasonic wave cannot be appropriately transmitted and received due to adhesion of dirt or the like) of the transmission and reception unit can be detected.

The transmission wave may include, as a main lobe, the ultrasonic wave traveling in the direction parallel or substantially parallel to the traveling direction of the movable body, and may include, as a side lobe, the ultrasonic wave traveling downward in the vertical direction from the transmission and reception unit.
Accordingly, the abnormality can be detected by effectively using the side lobe (an example of the non-directional component) of the ultrasonic wave.

The object detection device may further include a directivity change unit configured to change the directivity. Accordingly, the accuracy of detection of the object or the abnormality can be improved in various situations.

In the object detection device, it may be determined that there is no abnormality when the downward distance between the transmission and reception unit and the object present below the transmission and reception unit in the vertical direction is shorter than the predetermined reference distance and is larger than a predetermined threshold value for determining that the object is a step that the movable body can climb over, and the information regarding the abnormality may not be output. The predetermined threshold value may be variably set in consideration of an inclination of the movable body in a loaded state.

A movable body control device as an example of this disclosure includes: the object detection device described above; and a control device configured to perform a process for controlling a movable body based on the information regarding the abnormality output from the object detection device. Accordingly, the movable body can be controlled based on the abnormality detected by the object detection device described above.

Although the embodiments of this disclosure are described above, the embodiments described above and modifications thereof are presented by way of example only and are not intended to limit the scope of the invention. The novel embodiments and modifications thereof described above may be embodied in a variety of forms; furthermore, various omissions, substitutions, and changes in the form of the novel embodiments and modifications thereof may be made without departing from the gist of the invention. The embodiments and modifications thereof described above are included in the scope and gist of the invention, and are also included in the inventions described in the claims and their equivalents.

The principles, preferred embodiment and mode of operation of the present invention have been described in the foregoing specification. However, the invention which is intended to be protected is not to be construed as limited to the particular embodiments disclosed. Further, the embodiments described herein are to be regarded as illustrative rather than restrictive. Variations and changes may be made by others, and equivalents employed, without departing from the spirit of the present invention. Accordingly, it is expressly intended that all such variations, changes and equivalents which fall within the spirit and scope of the present invention as defined in the claims, be embraced thereby.
11860276
The detailed description explains embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.

DETAILED DESCRIPTION

Embodiments of the present invention provide for an optical measurement device that may operate as either a laser tracker or a laser scanner. This provides advantages in allowing either a higher accuracy measurement using a cooperative target, usually handheld by an operator, or a faster, usually lower accuracy, measurement, usually without the active assistance of an operator. These two modes of operation are provided in a single integrated device.

Referring now to FIGS. 1-2, an optical measurement device 30 is shown that provides for multiple modes of operation. The device 30 has a housing 32 containing a tracker portion 34 to support laser tracking functionality and a scanner portion 36 to support scanner functionality. An exemplary gimbaled beam steering mechanism 38 includes a zenith carriage 42 mounted on an azimuth base 40 and rotated about an azimuth axis 44. A payload structure 46 is mounted on the zenith carriage 42, which rotates about a zenith axis 48. The zenith axis 48 and the azimuth axis 44 intersect orthogonally, internally to the device 30, at the gimbal point 50. The gimbal point 50 is typically the origin for distance and angle measurements. One or more beams of light 52 virtually pass through the gimbal point 50. The emerging beams of light are pointed in a direction orthogonal to the zenith axis 48. In other words, the beam of light 52 lies in a plane that is approximately perpendicular to the zenith axis 48 and that contains the azimuth axis 44. The outgoing light beam 52 is pointed in the desired direction by rotation of the payload structure 46 about the zenith axis 48 and by rotation of the zenith carriage 42 about the azimuth axis 44. A zenith motor 51 and a zenith angular encoder 54 are arranged internal to the housing 32 and are attached to the zenith mechanical axis aligned to the zenith axis 48. An azimuth motor 55 and an angular encoder 56 are also arranged internal to the device 30 and are attached to an azimuth mechanical axis aligned to the azimuth axis 44. The zenith and azimuth motors 51, 55 operate to rotate the payload structure 46 about the axes 44, 48 simultaneously. As will be discussed in more detail below, in scanner mode the motors 51, 55 are each operated in a single direction, which results in the scanner light following a continuous pathway that does not reverse direction. The zenith and azimuth angular encoders measure the zenith and azimuth angles of rotation to relatively high accuracy. The light beam 52 travels to the target 58, which reflects the light beam 53 back toward the device 30. The target 58 may be a noncooperative target, such as the surface of an object 59 for example. Alternatively, the target 58 may be a retroreflector, such as a spherically mounted retroreflector (SMR) for example. By measuring the radial distance between the gimbal point 50 and the target 58, the rotation angle about the zenith axis 48, and the rotation angle about the azimuth axis 44, the position of the target 58 may be found within a spherical coordinate system of the device 30. As will be discussed in more detail herein, the device 30 includes one or more mirrors, lenses or apertures that define an optical delivery system that directs and receives light. The light beam 52 may include one or more wavelengths of light, such as visible and infrared wavelengths for example.
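Given the radial distance and the two encoder angles, conversion from the device's spherical coordinate system to Cartesian coordinates is straightforward. The sketch below uses one common convention (zenith angle measured from the vertical); the actual axis conventions of the device are not specified here and are assumed.

import math

def spherical_to_cartesian(r, azimuth, zenith):
    # r in meters; angles in radians; origin at the gimbal point 50
    x = r * math.sin(zenith) * math.cos(azimuth)
    y = r * math.sin(zenith) * math.sin(azimuth)
    z = r * math.cos(zenith)
    return x, y, z

# A target 5 m away at 30 degrees azimuth with the beam in the horizontal plane.
print(spherical_to_cartesian(5.0, math.radians(30), math.radians(90)))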
It should be appreciated that, although embodiments herein are discussed in reference to the gimbal steering mechanism 38, other types of steering mechanisms may be used. In other embodiments a mirror may be rotated about the azimuth and zenith axes, for example. In other embodiments, galvo mirrors may be used to steer the direction of the light. Similar to the exemplary embodiment, these other embodiments (e.g. galvo mirrors) may be used to steer the light in a single direction along a pathway without reversing direction, as is discussed in more detail below.

In one embodiment, magnetic nests 60 may be arranged on the azimuth base 40. The magnetic nests 60 are used with the tracker portion 34 for resetting the tracker to a “home position” for different sized SMRs, such as 1.5, ⅞ and 0.5 inch SMRs. An on-device retroreflector 62 may be used to reset the tracker to a reference distance. Further, a mirror (not shown) may be used in combination with the retroreflector 62 to enable performance of self-compensation, as described in U.S. Pat. No. 7,327,446, the contents of which are incorporated by reference.

Referring now to FIG. 3, an exemplary controller 64 is illustrated for controlling the operation of the device 30. The controller 64 includes a distributed processing system 66, processing systems for peripheral elements 68, 72, a computer 74, and other network components 76, represented here as a cloud. An exemplary embodiment of the distributed processing system 66 includes a master processor 78, payload function electronics 80, azimuth encoder electronics 82, zenith encoder electronics 86, display and user interface (UI) electronics 88, removable storage hardware 90, radio frequency identification (RFID) electronics 92, and an antenna 94. The payload function electronics 80 includes a number of functions such as the scanner electronics 96, the camera electronics 98 (for camera 168, FIG. 11), the ADM electronics 100, the position detector (PSD) electronics 102, and the level electronics 104. Some or all of the sub functions in the payload function electronics 80 have at least one processing unit, which may be a digital signal processor (DSP) or a field programmable gate array (FPGA), for example. Many types of peripheral devices are possible, such as a temperature sensor 68 and a personal digital assistant 72. The personal digital assistant 72 may be a cellular telecommunications device, such as a smart phone for example. The device 30 may communicate with peripheral devices by a variety of means, including wireless communication over the antenna 94, by means of a vision system such as a camera, and by means of distance and angular readings of the laser tracker to a cooperative target. Peripheral devices may contain processors. Generally, when the term scanner processor, laser tracker processor or measurement device processor is used, it is meant to include possible external computer and cloud support.

In an embodiment, a separate communications medium or bus goes from the processor 78 to each of the payload function electronics units 80, 82, 86, 88, 90, 92. Each communications medium may have, for example, three serial lines that include a data line, a clock line, and a frame line. The frame line indicates whether or not the electronics unit should pay attention to the clock line. If it indicates that attention should be given, the electronics unit reads the current value of the data line at each clock signal. The clock signal may correspond, for example, to a rising edge of a clock pulse. In one embodiment, information is transmitted over the data line in the form of a packet.
In other embodiments, each packet includes an address, a numeric value, a data message, and a checksum. The address indicates where, within the electronics unit, the data messages are to be directed. The location may, for example, correspond to a processor subroutine within the electronics unit. The numeric value indicates the length of the data message. The data message contains data or instructions for the electronics unit to carry out. The checksum is a numeric value that is used to minimize the chance of errors in data transmitted over the communications line (a minimal sketch of such a packet is given at the end of this passage).

In an embodiment, the processor 78 transmits packets of information over the bus 106 to the payload function electronics 80, over the bus 108 to the azimuth encoder electronics 82, over the bus 110 to the zenith encoder electronics 86, over the bus 112 to the display and UI electronics 88, over the bus 114 to the removable storage hardware 90, and over the bus 116 to the RFID and wireless electronics 92. In an embodiment, the processor 78 also sends a synchronization pulse over the synch bus 118 to each of the electronics units at the same time. The synch pulse provides a way of synchronizing values collected by the measurement functions of the device 30. For example, the azimuth encoder electronics 82 and the zenith encoder electronics 86 latch their encoder values as soon as the synch pulse is received. Similarly, the payload function electronics 80 latch the data collected by the electronics contained within the payload structure. The ADM and position detector all latch data when the synch pulse is given. In most embodiments, the camera and inclinometer collect data at a slower rate than the synch pulse rate but may latch data at multiples of the synch period. In one embodiment, the azimuth encoder electronics 82 and the zenith encoder electronics 86 are separated from one another and from the payload function electronics 80 by slip rings (not shown). Where slip rings are used, the bus lines 106, 108, 110 may be separate buses.

The distributed processing system 66 may communicate with an external computer 74, or may provide communication, display, and UI functions within the device 30. The device 30 communicates with the computer 74 over a communications link 120, such as an Ethernet line or a wireless connection, for example. The device 30 may also communicate with other elements, represented by the cloud 76, over a communications link 122, which might include one or more electrical cables, such as Ethernet cables for example, or one or more wireless connections. The element 76 may be another three-dimensional test instrument, such as an articulated arm CMM for example, which may be relocated by the device 30. A communication link 124 between the computer 74 and the element 76 may be wired or wireless. An operator sitting at a remote computer 74 may make a connection to the Internet, represented by the cloud 76, over an Ethernet or wireless link, which in turn connects to the processor 78 over an Ethernet or wireless link. In this way, a user may control the action of a remote device, such as a laser tracker.

Referring now to FIG. 4, an embodiment of the payload structure 46 within a device 30 is shown having a tracker portion 34 and a scanner portion 36. The portions 34 and 36 are integrated to emit light from the tracker and scanner portions over a substantially common optical inner beam path, which is represented in FIGS. 1 and 12-14 by the beam of light 52.
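Returning to the bus packet format described above, the following is a minimal sketch of one possible encoding; the byte layout and the simple one-byte additive checksum are assumptions, since the embodiment does not fix a concrete format.

def build_packet(address, message):
    body = bytes([address, len(message)]) + message
    checksum = sum(body) & 0xFF  # one-byte additive checksum (assumed)
    return body + bytes([checksum])

def parse_packet(packet):
    *body, checksum = packet
    if sum(body) & 0xFF != checksum:
        raise ValueError("checksum mismatch: corrupted transmission")
    address, length = body[0], body[1]
    return address, bytes(body[2:2 + length])

pkt = build_packet(0x12, b"\x01\x02")
print(parse_packet(pkt))  # (18, b'\x01\x02')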
However, although the light emitted by the tracker and scanner portions travels over a substantially common optical path, in an embodiment, the beams of light from the tracker and scanner portions are emitted at different times. In another embodiment, the beams are emitted at the same time. The tracker portion 34 includes a light source 126, an isolator 128, a fiber network 136, ADM electronics 140, a fiber launch 130, a beam splitter 132, and a position detector 134. In an embodiment, the light source 126 emits visible light. The light source may be, for example, a red or green diode laser or a vertical cavity surface emitting laser. The isolator may be a Faraday isolator, an attenuator, or any other suitable device capable of sufficiently reducing the amount of light transmitted back into the light source 126. Light from the isolator 128 travels into the fiber network 136. In one embodiment, the fiber network 136 is the fiber network shown in FIG. 6, as will be discussed in more detail below. The position detector 134 is arranged to receive a portion of the radiation emitted by the light source 126 and reflected by the target 58. The position detector 134 is configured to provide a signal to the controller 64. The signal is used by the controller 64 to activate the motors 51, 55 to steer the light beam 52 to track the target 58.

Some of the light entering the fiber network 136 is transmitted over optical fiber 138 to the reference channel of the ADM electronics 140. Another portion of the light entering the fiber network 136 passes through the fiber network 136 and the beam splitter 132. The light arrives at a dichroic beam splitter 142, which is configured to transmit light at the wavelength of the ADM light source. The light from the tracker portion 34 exits the payload structure 46 via an aperture 146 along an optical path 144. The light from the tracker portion 34 travels along the optical path 144, is reflected by the target 58, and returns along the optical path 144 to re-enter the payload structure 46 through the aperture 146. This returning light passes through the dichroic beam splitter 142 and travels back into the tracker portion 34. A first portion of the returning light passes through the beam splitter 132, into the fiber launch 130, and into the fiber network 136. Part of the light passes into optical fiber 148 and into the measure channel of the ADM electronics 140. A second portion of the returning light is reflected off of the beam splitter 132 and into the position detector 134.

In one embodiment, the ADM electronics 140 is that shown in FIG. 5. The ADM electronics 140 includes a frequency reference 3302, a synthesizer 3304, a measure detector 3306, a reference detector 3308, a measure mixer 3310, a reference mixer 3312, conditioning electronics 3314, 3316, 3318, 3320, a divide-by-N prescaler 3324, and an analog-to-digital converter (ADC) 3322. The frequency reference, which may be an oven controlled crystal oscillator for example, sends a reference frequency fREF, such as 10 MHz for example, to the synthesizer, which generates two electrical signals: one signal at a frequency fRF and two signals at a frequency fLO. The signal fRF goes to the light source 126. The two signals at frequency fLO go to the measure mixer 3310 and the reference mixer 3312. The light from optical fibers 138, 148 enters the reference and measure channels, respectively. The reference detector 3308 and the measure detector 3306 convert the optical signals into electrical signals. These signals are conditioned by electrical components 3316, 3314, respectively, and are sent to the mixers 3312, 3310, respectively.
The mixers produce a frequency fIF equal to the absolute value of fLO − fRF. The signal fRF may be a relatively high frequency, such as 2 GHz, while the signal fIF may have a relatively low frequency, such as 10 kHz. The reference frequency fREF is sent to the prescaler 3324, which divides the frequency by an integer value. For example, a frequency of 10 MHz might be divided by 40 to obtain an output frequency of 250 kHz. In this example, the 10 kHz signals entering the ADC 3322 would be sampled at a rate of 250 kHz, thereby producing 25 samples per cycle. The signals from the ADC 3322 are sent to a data processor 3400, such as one or more digital signal processors for example.

The method for extracting a distance is based on the calculation of the phase of the ADC signals for the reference and measure channels. This method is described in detail in U.S. Pat. No. 7,701,559 ('559 patent) to Bridges et al., the contents of which are herein incorporated by reference. The calculation includes the use of equations (1)-(8) of the '559 patent. In addition, when the ADM first begins to measure a target, the frequencies generated by the synthesizer are changed some number of times (for example, three times), and the possible ADM distances are calculated in each case. By comparing the possible ADM distances for each of the selected frequencies, an ambiguity in the ADM measurement is removed. The equations (1)-(8) of the '559 patent combined with the synchronization methods and Kalman filter methods described in the '559 patent enable the ADM to measure a moving target. In other embodiments, other methods of obtaining absolute distance measurements may be used, such as pulsed time-of-flight methods for example.

An embodiment of the fiber network 136 in FIG. 4 is shown as fiber network 420A in FIG. 6. This embodiment includes a first fiber coupler 430, a second fiber coupler 436, and low-transmission reflectors 435, 440. The first and second fiber couplers are 2×2 couplers, each having two input ports and two output ports. Couplers of this type are usually made by placing two fiber cores in close proximity and then drawing the fibers. In this way, evanescent coupling between the fibers can split off a desired fraction of the light to the adjacent fiber. Light travels through the first fiber coupler 430 and splits between two paths, the first path through optical fiber 433 to the second fiber coupler 436 and the second path through optical fiber 422 and the fiber length equalizer 423. The fiber length equalizer 423 connects to the fiber 138 in FIG. 4, which travels to the reference channel of the ADM electronics 140. The purpose of the fiber length equalizer 423 is to match the length of the optical fibers traversed by the light in the reference channel to the length of the optical fibers traversed by the light in the measure channel. Matching the fiber lengths in this way reduces ADM errors caused by changes in the ambient temperature. Such errors may arise because the effective optical path length of an optical fiber is equal to the average index of refraction of the optical fiber times the length of the fiber. Since the index of refraction of the optical fibers depends on the temperature of the fiber, a change in the temperature of the optical fibers causes changes in the effective optical path lengths of the measure and reference channels.
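As a simplified illustration of the phase-based distance extraction just described, the sketch below converts a measure/reference phase difference at a single modulation frequency into a distance. The actual method of the '559 patent (equations (1)-(8), synchronization, and Kalman filtering) is considerably more involved, and the numbers here are assumed for illustration only.

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_phase(phase_measure, phase_reference, f_rf=2.0e9):
    # Result is ambiguous modulo c / (2 * f_rf), about 75 mm at 2 GHz,
    # which is why the synthesizer frequency is changed several times
    # and the candidate distances compared, as described above.
    dphi = (phase_measure - phase_reference) % (2 * math.pi)
    return (C / (2 * f_rf)) * (dphi / (2 * math.pi))

print(distance_from_phase(1.0, 0.2))  # a distance within the first window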
If the effective optical path length of the optical fiber in the measure channel changes relative to the effective optical path length of the optical fiber in the reference channel, the result will be an apparent shift in the position of the target 58, even if the target 58 is kept stationary. To avoid this problem, two steps are taken. First, the length of the fiber in the reference channel is matched, as nearly as possible, to the length of the fiber in the measure channel. Second, the measure and reference fibers are routed side-by-side to the extent possible to ensure that the optical fibers in the two channels are subject to nearly the same changes in temperature. The light travels through optical fiber 433 to the second fiber optic coupler 436 and splits into two paths, the first path to the low-reflection fiber terminator 440 and the second path to optical fiber 438, from which it exits the fiber network.

Another embodiment of the fiber network 136 is shown in FIG. 7. In this embodiment, the fiber network 136 includes a first fiber coupler 457, a second fiber coupler 463, two low-reflection terminations 462, 467, an optical switch 468, a retroreflector 472, and an electrical input 469 to the optical switch. The optical switch may be one of several types. A commercially available and relatively inexpensive type is the micro-electro-mechanical system (MEMS) type. This type may use small mirrors constructed, for example, as a part of a semiconductor structure. Alternatively, the switch could be a modulator, which is available for very fast switching at certain wavelengths and at a cost that is somewhat higher than a MEMS type switch. Switches may also be constructed of optical attenuators, which may be turned on and off by electrical signals sent to the attenuators. A description of the specifications that may be considered in selecting fiber-optic switches is given in U.S. Patent Application Publication No. 2011/0032509 to Bridges, the contents of which are incorporated by reference. In general, to obtain the desired performance and simplicity, the switch may be a fiber-optic switch. It should be appreciated that the optical switching concept described above should perform equally well in a fiber network based on two colors.

The fiber network 136 contains an optical switch 468 and a retroreflector 472. Ordinarily the light travels from the fiber 465 through the upper port of the optical switch 468 and out the optical fiber 470. However, on occasion, when the laser tracker is not measuring a target, the optical switch diverts the optical signal from the optical fiber 465 to the optical fiber 471 and into the retroreflector 472. The purpose of switching the light to the retroreflector 472 is to remove any thermal drift that may have occurred in the components of the ADM system. Such components might include, for example, opto-electronic components such as optical detectors, optical fibers of the ADM system, electrical components such as mixers, amplifiers, synthesizers, and analog-to-digital converters, and optical components such as lenses and lens mounts. For example, suppose that at a first time, the path length of the measure channel was found to be 20 mm longer than the reference channel with the optical switch 468 diverting the light to the retroreflector 472. Suppose that at a later time the measure channel path length was found to be 20.003 mm longer than the reference channel path length with the optical switch 468 diverting the light to the retroreflector 472.
The ADM data processor would then subtract 0.003 mm from subsequent ADM readings. It should be understood that this procedure would start anew whenever the tracker set the ADM value at a home position of the laser tracker. In an embodiment, the retroreflector 472 is a fiber-optic retroreflector 472A of FIG. 8. This type of retroreflector 472 is typically a ferrule with the optical fiber polished at the end of the ferrule and covered with a coating 473, which might be gold or multiple layers of thin dielectric films, for example. In another embodiment, the retroreflector 472 is a free space retroreflector 472B of FIG. 9 that includes a collimator 474 and a retroreflector 476, which might be a cube-corner retroreflector slug, for example.

Still another embodiment of the fiber network 136 is shown in FIG. 10. In this embodiment, the fiber network 136 includes a first fiber coupler 1730, a second fiber coupler 1740, a third fiber coupler 1750, and three low-reflection terminations 1738, 1748, 1758. The light from optical fiber 1781 enters the fiber network 136 at the input port. The light travels through the first fiber coupler 1730. A portion of the light travels through optical fiber 138 and the fiber length equalizer 423 before entering the reference channel of the ADM electronics 140. Some of the light travels through the second fiber coupler 1740 and the third fiber coupler 1750 before passing out of the fiber network onto optical fiber 1753. The light from optical fiber 1743 enters the third fiber coupler 1750, where it is combined with the light from a second light source (not shown) via optical fiber 1790 to form a composite light beam that travels on optical fiber 1753. The optical coupler 1750 is a dichroic coupler because it is designed to use two wavelengths. After the composite light beam carried in optical fiber 1753 travels out of the laser tracker and reflects off the target 58, it returns to the fiber network 136. The light from the first light source passes through the third fiber coupler 1750 and the second fiber coupler 1740, and enters optical fiber 148, which leads to the measure channel of the ADM electronics 140. The light from the second light source (not shown) returns to optical fiber 1790 and travels back toward the second light source (not shown).

The couplers 1730, 1740, and 1750 may be of the fused type. With this type of optical coupler, two fiber core/cladding regions are brought close together and fused. Consequently, light between the cores is exchanged by evanescent coupling. In the case of two different wavelengths, it is possible to design an evanescent coupling arrangement that allows complete transmission of a first wavelength along the original fiber and complete coupling of a second wavelength over to the same fiber. Ordinarily there is not a complete (100 percent) coupling of the light into the coupler 1750. However, fiber-optic couplers that provide good coupling for two or more different wavelengths are commercially available at common wavelengths such as 980 nm, 1300 nm, and 1550 nm. In addition, fiber-optic couplers may be commercially purchased for other wavelengths, including visible wavelengths, and may be designed and manufactured for other wavelengths. For example, in FIG. 10, it is possible to configure a fiber optic coupler 1750 so that the first light at its first wavelength travels from optical fiber 1743 to optical fiber 1753 with low optical loss. At the same time, the arrangement may be configured to provide for a nearly complete coupling of the second light on optical fiber 1790 over to the optical fiber 1753.
Hence it is possible to transfer the first light and the second light through the fiber optic coupler and onto the same fiber 1753 with low loss. Optical couplers are commercially available that combine light at widely separated wavelengths. For example, couplers are commercially available that combine light at a wavelength of 1310 nm with light at a wavelength of 660 nm. For propagation of both wavelengths over long distances in a single transverse mode with relatively low loss of optical power, it is generally desirable that the two wavelengths be relatively close together. For example, the two selected wavelengths might be 633 nm and 780 nm, which are relatively close together in wavelength and could be transmitted through a single-mode optical fiber over a long distance without high loss. An advantage of the dichroic fiber coupler 1750 within the fiber network 136 is that it is more compact than a free space beam splitter. In addition, the dichroic fiber coupler ensures that the first light and the second light are very well aligned without requiring any special optical alignment procedures during production.

Referring back to FIG. 4, the scanner portion 36 may be embedded in a scanner such as that shown in FIG. 11, discussed herein below, for example. The light, such as infrared light at about 1550 nm for example, from the scanner portion 36 travels along an optical path 150 to the dichroic mirror 142. The dichroic mirror 142 is configured to reflect the light from the scanner while allowing light from the laser tracker to pass through. The light from the scanner portion 36 travels to the target 58 and returns along an optical path 152 to an annular aperture 154. The returning light passes through the annular aperture 154 and along an outer beam path to reflect off of the dichroic mirror 142 along an optical path 156 back to the scanner portion 36. In one embodiment, the outer beam path (defined by the annular aperture 154) is coaxial with the inner beam path (defined by the aperture 146). Advantages may be gained by returning the scanner light through the annular aperture 154 to avoid unwanted light from the aperture 146 that could corrupt the light reflected off of the target 58. In the exemplary embodiment the aperture 146 and the annular aperture 154 are concentrically arranged. In this embodiment, the aperture 146 has a diameter of about 15 mm and the annular aperture 154 has an inner diameter of 15 mm and an outer diameter of 35 mm. It should be appreciated that in the exemplary embodiment the dichroic mirror 142 is positioned at the gimbal point 50. In this manner, light from both the scanner portion 36 and the tracker portion 34 may appear to originate from the same point in the device 30. In the exemplary embodiment, the tracker portion 34 emits a visible laser light, while the scanner portion 36 emits a light in the near infrared spectrum. The light from the tracker portion 34 may have a wavelength of about 700 nm and the light from the scanner portion 36 may have a wavelength of about 1550 nm.

One embodiment of the scanner portion 36 is shown in FIG. 11. In this embodiment, the scanner portion 36 includes a light emitter 160 that emits a light beam 162 through a collimator 165. The light emitter 160 may be a laser diode that emits light at a wavelength of approximately 1550 nm. It should be appreciated that other electromagnetic waves having, for example, a lesser or greater wavelength may be used.
The light beam 162 may be intensity modulated or amplitude modulated, such as with a sinusoidal or rectangular waveform modulation signal. The light beam 162 is sent to the dichroic beam splitter 142, which reflects the light beam 162 through the aperture 146 and onto the target 58. In the exemplary embodiment, the light beam 162 is reflected off of a mirror 170 and a dichroic beam splitter 172 to allow the light beam 162 to travel along the desired optical path of the light beams 52, 150. As will be discussed in more detail below, the use of a dichroic beam splitter 172 provides advantages in allowing for the incorporation of a color camera 168 that acquires images during operation. In other embodiments, the light emitter 160 may be arranged to directly transmit the light onto the dichroic mirror 142 without first reflecting off a mirror 170 and a dichroic beam splitter 172.

As shown in FIGS. 4 and 11, the outgoing light from the tracker and scanner portions 34, 36 both pass through the same aperture 146. The light from the tracker and scanner portions 34, 36 is substantially collinear and travels along the optical path of the light beam 52 of FIG. 1. On the return path, the light from the tracker portion 34 will have been reflected by a retroreflector target and hence is approximately collimated when it returns to the device 30. The returning beam of tracker light passes back through the aperture 146, which is the same aperture through which it exited the device 30. On the other hand, the light from the scanner portion 36 usually strikes a diffusely scattering object 59 and spreads over a wide angle as it returns. A small portion of the reflected light passes through the annular aperture 154, which is positioned so that its inner diameter is the same as the diameter of the aperture 146, with which it is concentric. The returning light 163 reflects off the dichroic beam splitter, passes as a beam of light 163 through the receiver lens 166, reflects off the reflective surfaces 176, 178, and passes through a collection of lenses within the light receiver 182 before arriving at an optical detector. The returning scanner light is directed through the annular aperture 154 without including any light that may pass back through the inner aperture 146. This provides advantages since the optical power of the outgoing beam is so much greater than the light returned by the object that it is desirable to avoid having back reflections off optical elements along the path of the inner aperture 146.

In an embodiment, an optional color camera 168 is arranged so that a portion of the light reflected by the object passes through the dichroic mirror 172 into the color camera 168. The coatings on the dichroic mirror are selected to pass the visible wavelengths picked up by the color camera while reflecting light at the wavelength emitted by the light emitter 160. The camera 168 may be coupled to the receiver lens 166 with an adhesive or within a recess, for example. The color camera 168 allows color pictures to be acquired, usually by making a few discrete steps at a time following acquisition of data points by the distance meter within the scanner.

In an embodiment, a mask 174 is coaxially arranged on the optical axis behind the receiver lens 166. The mask 174 has a large area in which the returning light beam 163 is allowed to pass unimpeded. The mask 174 has shaded regions positioned radially outward from the optical axis in order to reduce the intensity of the returning light beam 163 in such a way as to make the intensities of the returning light more nearly comparable for different object distances from the device 30.
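The effect of the mask can be pictured with a simple radiometric model: for a diffusely scattering surface, the collected optical power falls off roughly as the inverse square of the distance, so near returns arrive much stronger than far ones. The sketch below flattens that falloff with a distance-dependent transmission; it is a simplification, since the actual mask shades regions of the receiver aperture rather than acting directly on object distance, and the profile shown is an assumption.

def relative_return_power(r, r_ref=10.0):
    # Collected power relative to the reference distance, ~1/r^2 falloff.
    return (r_ref / r) ** 2

def mask_transmission(r, r_ref=10.0):
    # Transmission that would flatten the 1/r^2 falloff (capped at 1.0).
    return min(1.0, (r / r_ref) ** 2)

for r in (1.0, 5.0, 10.0, 20.0):
    equalized = relative_return_power(r) * mask_transmission(r)
    print(r, equalized)  # constant up to r_ref, then falling as 1/r^2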
In an embodiment, a rear mirror 176 is arranged on the optical axis behind the mask 174. The rear mirror 176 reflects the returning light beam 163 that is refracted by the receiver lens 166 towards a central mirror 178. The central mirror 178 is arranged in the center of the mask 174 on the optical axis. In embodiments having a color camera 168, this area may be shadowed by the color camera 168. The central mirror 178 may be an aspherical mirror which acts as both a negative lens (i.e., increases the focal length) and as a near-field-correction lens (i.e., shifts the focus of the returning light beam 163 which is reflected by the target). Additionally, a reflection is provided only to the extent that the returning light beam 163 passes the mask 174 arranged on the central mirror 178. The central mirror 178 reflects the returning light beam through a central orifice 180 in the rear mirror 176. A light receiver 182, having an entrance diaphragm, a collimator with filter, a collecting lens, and an optical detector, is arranged adjacent to the rear mirror 176 opposite the mask 174. In one embodiment, a mirror 184 deflects the returning light beam 163 by 90°.

In one embodiment, the scanner portion 36 may have one or more processors 186, which may be the same as or supplementary to the scanner processor electronics 96 of FIG. 3. The processor 186 performs control and evaluation functions for the scanner portion 36. The processor 186 is coupled to communicate with the light emitter 160 and the light receiver 182. The processor 186 determines, for each measured point, the distance between the device 30 and the target 58 based on the time of flight of the emitted light beam 162 and the returning light beam 163. In other embodiments, the processor 186 and its functionality may be integrated into the controller 64, which may correspond to the scanner processor 96, the master processor 78, the computer 74, or the networked elements 76 of FIG. 3.

The optical distance meters of the tracker portion 34 and the scanner portion 36 may determine distance using the principle of time-of-flight. It should be understood that the term time-of-flight is used here to indicate any method in which modulated light is evaluated to determine the distance to a target. For example, the light from the tracker portion 34 or the scanner portion 36 may be modulated in optical power (intensity modulation) using a sinusoidal wave. The detected light may be evaluated to determine the phase shift between a reference and a measure beam to determine the distance to a target. In another embodiment, the optical power of the light may be modulated by pulsed light having an approximately rectangular shape. In this case, the leading edge of the pulse may be measured on the way out of the device 30 and upon return to the device 30, and the elapsed time is used to determine the distance to the target. Another method involves changing the polarization state of light as a function of time by means of an external modulator and then noting the frequency of modulation at which returning light is extinguished after it is passed through a polarizer. Many other methods of measuring distance fall within the general time-of-flight category.

Another general method of measuring distance is referred to as a coherent or interferometric method. Unlike the previous methods, in which the optical power of a beam of light is evaluated, coherent or interferometric methods involve combining two beams of light that are mutually coherent so that optical interference of the electric fields occurs.
Addition of electric fields rather than optical powers is analogous to adding electrical voltages rather than electrical powers. One type of coherent distance meter involves changing the wavelength of light as a function of time. For example, the wavelength may be changed in a sawtooth pattern (changing linearly with periodic repetitions). A device made using such a method is sometimes referred to as frequency modulated coherent laser (FMCL) radar. Any method, coherent or time-of-flight, may be used in the distance meters of the tracker portion 34 and the scanner portion 36.

Referring now to FIGS. 12-14, an embodiment of the device is shown with front covers removed and some optical and electrical components omitted for clarity. In this embodiment the device 30 includes a gimbal assembly 3610, which includes a zenith shaft 3630 and an optics bench assembly 3620 having a mating tube 3622. The zenith shaft includes a shaft 3634 and a mating sleeve 3632. The zenith shaft 3630 may be fabricated from a single piece of metal in order to improve rigidity and temperature stability. FIG. 14 shows an embodiment of an optics bench assembly 3720 and the zenith shaft 3630. The optics bench assembly 3720 includes a main optics assembly 3650 and a secondary optics assembly 3740. The housing for the main optics assembly 3650 may be fabricated out of a single piece of metal to improve rigidity and temperature stability and includes a mating tube 3622. In an embodiment, the central axis of the mating tube 3622 is aligned with the central axis of the mating sleeve 3632. In one embodiment, four fasteners 3664 attach the secondary optics assembly 3740 to the main optics assembly 3650. The mating tube 3622 is inserted into the mating sleeve 3632 and held in place by three screws 3662. In an embodiment, the mating tube 3622 is aligned with the mating sleeve 3632 by means of two pins on one end of the mating tube 3622, the pins fitting into holes 3666. Although the gimbal assembly 3610 is designed to hold an optics bench 3620, other types of devices such as a camera, a laser engraver, a video tracker, a laser pointer and angular measuring device, or a Light Detection and Ranging (LIDAR) system could be disposed on the zenith shaft 3630. Due to the alignment registration provided by the mating sleeve 3632, such devices could be easily and accurately attached to the gimbal assembly 3610. In the exemplary embodiment, the tracker portion 34 is arranged within the main optics assembly 3650, while the scanner portion 36 is disposed in the secondary optics assembly 3740. The dichroic mirror 142 is arranged in the main optics assembly 3650 as shown in FIG. 14.

In operation, the device 30 has two modes of operation, as shown in FIG. 15 and FIG. 16, depending on the level of accuracy desired. The first mode (FIG. 15) uses the tracker portion 34 in combination with a cooperative target 58, such as a retroreflector target, which might be a spherically mounted retroreflector (SMR) for example. In this first mode, the device 30 emits a light beam 52 that virtually passes through the gimbal point 50, the dichroic mirror 142, and the aperture 146 towards the target 58. The light 52 strikes the target 58, and a portion of the light travels back along the same optical pathway through the aperture 146 and the dichroic mirror 142 to the tracker portion 34. The device 30 then determines the distance from the device 30 to the target 58 as discussed herein above with respect to FIGS. 4-10. In an embodiment, during this first mode of operation, the scanner portion 36 does not operate.
In the second mode of operation shown in FIG. 16, the scanner portion 36 emits a light beam 162 that reflects off of the dichroic mirror 142 and is emitted through the aperture 146 toward the target 58. It should be appreciated that the scanner portion 36 may measure the distance to a noncooperative target and does not need a target such as a retroreflector to obtain measurements. The light reflects (scatters) off of the target 58 and a portion 163 of the light returns through the annular aperture 154. As discussed above, it is desirable for the returning light 163 to pass through the annular aperture 154 since this provides advantages in reducing back reflections from the optics which could corrupt the returning light signal. The returning light 163 reflects off of the dichroic mirror 142 back to the scanner portion 36, whereupon the distance from the device 30 to the target 58 is determined as discussed herein above with respect to FIG. 11.

The scanner portion 36 operates continuously as the payload structure 46 is rotated simultaneously about the azimuth axis 44 and the zenith axis 48. In the exemplary embodiment, the path followed by the light beam 162 proceeds in a single direction (e.g. does not reverse) as the payload 46 rotates about the axes 44, 48. This pathway may be achieved by continuously rotating each of the zenith and azimuth motors in a single direction. Another way of stating this is to say that in the second mode, the beam is directed to an object surface while the zenith and azimuth angles are continuously and monotonically changing. Notice that the beam may be steered rapidly about one axis (either the zenith or the azimuth axis) while being steered relatively more slowly about the other axis. In one embodiment, the movement of the payload 46 results in the light beam 162 following a spiral pathway.

It should be appreciated that having the scanner portion 36 operate such that the path of the light beam 162 does not have to reverse provides several advantages over scanners that follow a raster-type pattern or a random pattern. First, a large amount of data may be efficiently collected since a reversal of direction is not required. As a result, the scanner portion 36 can effectively scan a large area while acquiring data at a high sample rate, such as more than one million three-dimensional points per second. Second, by proceeding continuously in a single direction, in the event that the light beam intersects with a person, the total energy deposited on an area of the person is small. This allows for a more desirable IEC 60825-1 laser categorization. In one embodiment, the tracker portion 34 emits a light beam 52 in the visible light spectrum. In this embodiment, the tracker portion 34 may emit the light beam 52 as the scanner portion 36 emits the light 162. This provides advantages since the visible light 52 from the tracker portion 34 provides a visible reference for the operator.

Turning now to FIGS. 17-18, a method of operating the device 30 is shown. The method 190 starts with selecting the tracker portion 34 as the mode of operation in block 192. The method then proceeds to block 194, where the tracker portion 34 is activated. The gimbal mechanism is then moved about the zenith and azimuth axes in block 196 to steer the light beam toward the target 58. The light reflects off the cooperative target 58 and returns to the device 30 through the aperture 146 in block 198. The device 30 then calculates the distance from the device 30 to the target 58 in block 200.
Turning now toFIGS.17-18, a method of operating the device30is shown. The method190starts with selecting the mode of operation corresponding to the tracker portion34in block192. The method then proceeds to block194where the tracker portion34is activated. The gimbal mechanism is then moved about the zenith and azimuth axes in block196to steer the light beam toward the target58. The light reflects off the cooperative target58and returns to the device30through the aperture146in block198. The device30then calculates the distance from the device30to the target58in block200. The azimuth and zenith angles are determined in block202and the three-dimensional coordinates (distance and two angles) for the measured point are determined. This process may be repeated until all the desired measured points have been determined.

Referring now toFIG.18, the method203is shown wherein the scanner portion36is selected in block204. The method203then proceeds to block206where the scanner portion36is activated. Where it is desirable to provide a visible reference light, the light from the tracker portion34is activated in block208. The light is transmitted from the scanner portion36through the aperture146towards the target58. In the exemplary embodiment, the light from the scanner portion36is emitted along a pathway in a single direction (such as a spiral shape) without reversing direction as indicated in block209. The light is reflected off of the target58and back towards the device30. The returning light is received through the annular aperture154in block210. The distance from the device30to the target58is determined in block212. The azimuth and zenith angles are determined in block214and the coordinates (distance and two angles) to the measured point on the target58are determined.

The method of directing the beam of light from the scanner portion36to the object59may be carried out in different ways. In a first embodiment, light from the scanner portion36is directed with the gimbal assembly3610facing in the same general direction. In this mode of operation, the beam is directed to any desired point. In a second embodiment, light from the scanner portion36is directed with the gimbal assembly3610spinning at a relatively rapid constant rate about an axis, which might be either the azimuth axis or the zenith axis. The other axis is also moved but at a relatively slower rate. In this way, the beam is directed in a slow spiral. With the second embodiment, a thorough scan of a large volume can be quickly performed. Another advantage of the second embodiment is that the constantly moving beam intercepts the pupil of the human eye for a shorter time during its continual movement. Because of this, higher laser powers can be used while providing a desired IEC 60825-1 categorization.

Referring now toFIG.19, another embodiment of the device30is shown having a first absolute distance meter within the tracker portion34and a second absolute distance meter within the scanner portion36, the portions34and36coupled to a payload structure46. In this embodiment, the tracker portion34and the scanner36do not emit light over a common optical pathway. The tracker portion34is arranged to direct the light beam52in a first radial direction while the scanner36is arranged to direct the light beam162in a second radial direction toward a surface58′. The first radial direction and second radial direction define an angle θ therebetween. In the exemplary embodiment, the angle θ is 90 degrees. In other embodiments, the angle θ is between 5 degrees and 180 degrees. However, any suitable angle may be used which allows the tracker portion34and the scanner portion36to be positioned within the payload structure46. It should be appreciated that as the payload structure46is rotated about the azimuth axis44, the tracker portion34and the scanner36will be oriented at the same azimuth angle.
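Blocks202and214each determine a point from a distance and two angles. The sketch below shows the conventional conversion of such a spherical measurement into Cartesian coordinates; the angle conventions (zenith measured down from the vertical axis, azimuth in the horizontal plane) are assumptions for illustration, not taken from the figures.

```python
import math

def to_cartesian(distance, azimuth, zenith):
    """Convert a (distance, azimuth angle, zenith angle) measurement into
    x, y, z coordinates with the gimbal point at the origin.

    Assumed convention: zenith is measured down from the vertical
    (azimuth) axis and azimuth is measured in the horizontal plane.
    """
    x = distance * math.sin(zenith) * math.cos(azimuth)
    y = distance * math.sin(zenith) * math.sin(azimuth)
    z = distance * math.cos(zenith)
    return x, y, z

# Example: a point 5 m away at 30 deg azimuth and 80 deg zenith.
print(to_cartesian(5.0, math.radians(30.0), math.radians(80.0)))
```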
Referring now toFIG.20, another embodiment of the device30is shown having a tracker portion34and a scanner portion36. In this embodiment, the tracker portion34is oriented in parallel with the scanner portion36and uses a mirror216to reflect the light52towards the dichroic beam splitter142. In this embodiment, the dichroic beam splitter142is configured to reflect the light52while allowing the light162from the scanner portion36to pass through. The light beams52,162pass through an aperture146and are directed along the optical axis A toward an angled rotating mirror218that is arranged to rotate about a horizontal axis48. The outbound light52,162reflects off of the mirror218at the center C10and is deflected towards the target58(for the tracker portion) or the surface58′ (for the scanner portion). The center C10defines the origin of the reference system. The reflected light from the target58or surface58′ is reflected back off of the rotary mirror218and back toward the aperture146. The light52reflects off of the rotary mirror218at the center C10and back through the aperture146. The light52reflects off of the dichroic mirror142and the mirror216before returning to the tracker portion34. The returning light163reflects off the rotary mirror218and passes through the annular aperture154before returning to the scanner36. The direction of the emitted light52,162and the reflected light results from the angular positions of the rotary mirror218about the horizontal axis48and of the payload structure46about the vertical axis44. The angular positions are measured by encoders54,56respectively. It should be appreciated that in one mode of operation, the measurements by the tracker portion34and scanner portion36are performed by means of a fast rotation of the mirror218and the slow rotation of the payload structure46. Thus, the whole space may be measured, step by step, as the device progresses in a circle.

In an embodiment, the beam of light from the scanner is adjustably focused rather than collimated. In geometrical optics, a focused beam of light is brought to a point, but in reality, the beam of light is brought to a beam waist near the calculated focus position. At the beam waist position, the width of the beam is at its smallest as the beam propagates. One advantage of sending a focused beam of light from the scanner is that a smaller beam can more accurately determine 3D coordinates at edges. For example, a smaller focused beam permits more accurate determination of hole diameter or of feature size. Another advantage of sending a focused beam of light from the scanner is that a focused beam can be steered to find the position of maximum reflectance of light from a tooling ball retroreflector, which is simply a shiny/highly-reflective metallic sphere. Such a method of directing a beam of light from the scanner to the tooling ball permits accurate determination of distance and angles to the tooling ball. Because of this, the tooling ball can be used as a target. With a device that combines scanner and tracker functionality, as illustrated herein, two types of targets are then made available: SMRs and tooling balls. The use of two different types of targets provides an easy method for getting the tracker and the scanner systems in the same frame of reference since the SMRs and tooling balls can both be held in the same magnetic nests distributed throughout an environment.
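As a rough illustration of steering the focused beam to find the position of maximum reflectance of a tooling ball, the sketch below performs a simple grid search around a nominal direction. The callback measure_reflectance and all parameter values are hypothetical; a real instrument might refine the peak with a centroid fit or hill-climbing step rather than an exhaustive grid.

```python
def find_tooling_ball(measure_reflectance, az0, zen0,
                      half_width=0.002, steps=21):
    """Locate the direction of peak reflectance near a nominal direction.

    measure_reflectance(az, zen) is a hypothetical callback that steers the
    focused beam to the given angles (radians) and returns the detected
    return-light power. A coarse grid search over a small angular window
    around (az0, zen0) is used here for simplicity.
    """
    best = (az0, zen0, measure_reflectance(az0, zen0))
    for i in range(steps):
        for j in range(steps):
            az = az0 + (-half_width + 2.0 * half_width * i / (steps - 1))
            zen = zen0 + (-half_width + 2.0 * half_width * j / (steps - 1))
            power = measure_reflectance(az, zen)
            if power > best[2]:
                best = (az, zen, power)
    return best  # (azimuth, zenith, peak power)
```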
In an embodiment, an adjustable focusing element39is added to other elements of the scanner36. This additional adjustable focusing element is shown inFIGS.21-26.FIG.21is similar toFIG.4except that the scanner36is shown to have two internal elements—scanner elements37and adjustable focusing mechanism39.FIG.22is similar toFIG.11except that an adjustable focusing mechanism39is included in the scanner36.FIGS.23,24are similar toFIGS.15,16except that the scanner36is shown to include scanner elements37and adjustable focusing mechanism39.FIG.25is similar toFIG.19except the scanner36is shown to include scanner elements37and adjustable focusing mechanism39.

In an embodiment, the adjustable focusing mechanism39includes some basic lens elements, which may include optional elements2604,2606. In addition, the adjustable focusing mechanism39includes a lens element2602attached to a motorized adjustment stage2610configured to move the lens2602back and forth to obtain the desired adjustment. In an embodiment, the scanner electronics96ofFIG.3provides the electrical control of the motorized adjustment stage2610. Many types of lens assemblies and adjustment methods are known in the art for providing adjustable focus in a lens assembly. It is understood by one of ordinary skill in the art that any such method may be used to provide adjustable focus in the present invention.

While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another. Furthermore, the use of the terms a, an, etc. do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
11860277
DESCRIPTION
A LIDAR system is configured to output a system output signal and to receive a system return signal. The system return signal includes light that was included in the system output signal and that was reflected by an object located outside of the LIDAR system. A time delay occurs between the light being output from the LIDAR system and returning to the LIDAR system. The LIDAR system also includes electronics that generate LIDAR data from the system return signal. The LIDAR data is generated from a portion of the system return signal that returns to the LIDAR system during a data window. The electronics tune the duration of the data window in response to the amount of the time delay. For instance, the electronics can tune the duration of the data window such that the duration of the data window increases for shorter time delays.

When the duration of the data window is fixed rather than tuned, the duration of the data window needs to be undesirably short. The amount of the time delay changes in response to the distance between the LIDAR system and the reflecting object. For instance, the time delay increases as the separation distance increases. As a result, as the separation distance increases it becomes possible that the system return signal has not yet returned to the LIDAR system while the data window is open. To prevent this situation, the duration of the fixed data window is reduced to ensure that the system return signals are returning to the LIDAR system for the full duration of the data window. However, the shortened duration of the fixed data window means that the system return signals are often returning to the LIDAR system outside of the fixed data window. As a result, a portion of the return signal is not taken into account when generating the LIDAR data. Tuning the duration of the data window allows a larger portion of a returning system return signal to be used in the generation of the LIDAR data. Increasing the portion of a returning system return signal that is used in generating LIDAR data increases the reliability of the LIDAR data.

FIG.1is a topview of a schematic of a LIDAR chip that can serve as a LIDAR system or can be included in a LIDAR system that includes components in addition to the LIDAR chip. The LIDAR chip can include a Photonic Integrated Circuit (PIC) and can be a Photonic Integrated Circuit chip. The LIDAR chip includes a light system2that includes a synchronization reference light source3configured to output a synchronization reference signal on a synchronization reference waveguide4. The light system2also includes a synchronization light source5configured to output a synchronization signal on a synchronization waveguide6. The light system2also includes a light source7configured to output a LIDAR signal on a source waveguide8. The synchronization reference signal, the synchronization signal, and the LIDAR signal can each have a different wavelength. The source waveguide8carries the LIDAR signal to a LIDAR engine9that processes the light signals from which the LIDAR data is generated.

The LIDAR engine9includes a phase modulator11positioned along the synchronization waveguide6. The phase modulator11is configured to modulate the phase of the synchronization signal such that the phase modulator11outputs a modulated signal carried on the synchronization waveguide6.
Suitable phase modulators11include, but are not limited to, Mach Zehnder modulators, PIN diodes operated in forward bias (carrier injection) mode, PN diodes operated in reverse bias (depletion) mode and devices based on electro-optic materials such as lithium niobate, and III-V based active devices such as semiconductor optical amplifiers (SOA).

An intensity modulator12is optionally positioned along the synchronization waveguide6and is configured to modulate the intensity of the modulated signal and output the result on the synchronization waveguide6as an outgoing synchronization signal. The intensity modulator12can be configured to pass the synchronization signal without substantial attenuation or to attenuate the synchronization signal. Accordingly, an attenuated version or an unattenuated version of the synchronization signal can serve as the outgoing synchronization signal. In some instances, the intensity modulator is configured to pass the modulated signal on the synchronization waveguide6as the outgoing synchronization signal without substantial attenuation and/or to attenuate the modulated signal such that a light signal is not output on the synchronization waveguide6or is effectively not output on the synchronization waveguide6. Suitable intensity modulators16include, but are not limited to, PIN diodes operated in forward bias (carrier injection) mode, PN diodes operated in reverse bias (depletion) mode and devices based on electro-optic materials such as lithium niobate.

The LIDAR engine9includes a combiner13that receives the outgoing synchronization signal from the synchronization waveguide6and also receives the LIDAR signal from the source waveguide8. The combiner13is configured to combine the outgoing synchronization signal and the LIDAR signal into an outgoing LIDAR signal that is carried on a utility waveguide. Accordingly, the outgoing LIDAR signal has a contribution from the outgoing synchronization signal and the LIDAR signal. Suitable combiners include, but are not limited to, 1×2 y-junction couplers, 1×2 Multimode Interference (MMI) couplers, Wavelength Division Multiplexer (WDM) components such as echelle gratings, Arrayed Waveguide Gratings (AWGs) or Mach-Zehnder Interferometers (MZIs).

The LIDAR engine9includes a facet17at which the utility waveguide12terminates. The utility waveguide12carries the outgoing LIDAR signal to the facet17. The facet17can be positioned such that the outgoing LIDAR signal traveling through the facet17exits the LIDAR chip and serves as a LIDAR output signal. For instance, the facet17can be positioned at an edge of the chip so the outgoing LIDAR signal traveling through the facet17exits the chip and serves as the LIDAR output signal. In some instances, the portion of the LIDAR output signal that has exited from the LIDAR chip can also be considered a system output signal. As an example, when the exit of the LIDAR output signal from the LIDAR chip is also an exit of the LIDAR output signal from the LIDAR system, the LIDAR output signal can also be considered a system output signal.

Light from the LIDAR output signal travels away from the LIDAR system in the system output signal. The system output signal can travel through free space in the atmosphere in which the LIDAR system is positioned. The system output signal may be reflected by one or more objects in the path of the system output signal.
When the system output signal is reflected, at least a portion of the reflected light travels back toward the LIDAR chip as a system return signal. Light from the system return signal can be carried in a first LIDAR input signal that is received by the LIDAR chip. In some instances, a portion of the system return signal can serve as the first LIDAR input signal.

The LIDAR engine9includes a comparative waveguide18that terminates at a facet19. The first LIDAR input signal enters the comparative waveguide18through the facet19and serves as a comparative signal. The comparative waveguide18carries the comparative signal to a processing unit20configured to convert the optical signals to electrical signals from which LIDAR data (the radial velocity and/or distance between the LIDAR system and one or more objects located outside of the LIDAR system) is generated.

A splitter22is positioned along the source waveguide8and is configured to move a portion of the LIDAR signal from the source waveguide8onto a LIDAR reference waveguide24as a LIDAR reference signal. The percentage of light transferred from the source waveguide8by the splitter22can be fixed or substantially fixed. For instance, the splitter22can be configured such that the power of the LIDAR signal transferred to the LIDAR reference waveguide24is a percentage of the power of the LIDAR signal. In some instances, the percentage is greater than 1%, 10%, or 20% and/or less than 50% or 60%. Suitable splitters22include, but are not limited to, optical couplers, y-junctions, tapered couplers, and Multi-Mode Interference (MMI) devices.

The LIDAR reference waveguide24carries the LIDAR reference signal to a second combiner26. The second combiner26also receives the synchronization reference signal from the synchronization reference waveguide4. The second combiner26is configured to combine the synchronization reference signal and the LIDAR reference signal into a reference signal that is carried on a reference waveguide28. The reference waveguide28carries the reference signal to the processing unit20for further processing. Accordingly, the reference signal has a contribution from the synchronization reference signal and the LIDAR reference signal. Suitable combiners include, but are not limited to, y-junctions, tapered couplers, Multi-Mode Interference (MMI) devices, Wavelength Division Multiplexers such as echelle gratings and Arrayed Waveguide Gratings (AWGs).

In some instances, a LIDAR chip constructed according toFIG.1is used in conjunction with a LIDAR adapter. In some instances, the LIDAR adapter can be physically and optically positioned between the LIDAR chip and the one or more reflecting objects and/or the field of view in that an optical path that the first LIDAR input signal(s) and/or the LIDAR output signal travels from the LIDAR chip to the field of view passes through the LIDAR adapter. Additionally, the LIDAR adapter can be configured to operate on the system return signal and the LIDAR output signal such that the first LIDAR input signal and the LIDAR output signal travel on different optical pathways between the LIDAR adapter and the LIDAR chip but on the same optical pathway between the LIDAR adapter and a reflecting object in the field of view.
Additionally or alternately, the LIDAR adapter can be configured to operate on the system return signal and the LIDAR output signal such that the second LIDAR input signal and the LIDAR output signal travel on different optical pathways between the LIDAR adapter and the LIDAR chip but on the same optical pathway between the LIDAR adapter and a reflecting object in the field of view.

An example of a LIDAR adapter that is suitable for use with the LIDAR chip ofFIG.1is illustrated inFIG.2. The LIDAR adapter includes multiple components positioned on a base. For instance, the LIDAR adapter includes a circulator100positioned on a base102. The illustrated optical circulator100includes three ports and is configured such that light entering one port exits from the next port. For instance, the illustrated optical circulator includes a first port104, a second port106, and a third port108. The LIDAR output signal enters the first port104from the utility waveguide12of the LIDAR chip and exits from the second port106as an assembly output signal.

The assembly output signal includes, consists of, or consists essentially of light from the LIDAR output signal received from the LIDAR chip. Accordingly, the assembly output signal may be the same or substantially the same as the LIDAR output signal received from the LIDAR chip. However, there may be differences between the assembly output signal and the LIDAR output signal received from the LIDAR chip. For instance, the LIDAR output signal can experience optical loss as it travels through the LIDAR adapter and/or the LIDAR adapter can optionally include an amplifier110configured to amplify the LIDAR output signal as it travels through the LIDAR adapter.

When one or more objects in the sample region reflect light from the assembly output signal, at least a portion of the reflected light travels back to the circulator100as an assembly return signal. At least a portion of the light from the assembly return signal enters the circulator100through the second port106.FIG.2illustrates the LIDAR output signal and the assembly return signal traveling between the LIDAR adapter and the sample region along the same optical path. The assembly return signal exits the circulator100through the third port108and is directed to the input waveguide16on the LIDAR chip. Accordingly, light from the assembly return signal can serve as the first LIDAR input signal and the first LIDAR input signal includes or consists of light from the assembly return signal. Accordingly, the LIDAR output signal and the first LIDAR input signal travel between the LIDAR adapter and the LIDAR chip along different optical paths.

As is evident fromFIG.2, the LIDAR adapter can optionally include optical components in addition to the circulator100. For instance, the LIDAR adapter can include components for directing and controlling the optical path of the LIDAR output signal and the LIDAR return signal. As an example, the adapter ofFIG.2includes an optional amplifier110positioned so as to receive and amplify the LIDAR output signal before the LIDAR output signal enters the circulator100. The amplifier110can be operated by electronics62allowing the electronics62to control the power of the LIDAR output signal.

The optical components can include one or more beam-shaping components. For instance,FIG.2illustrates the LIDAR adapter including an optional first lens112and an optional second lens114. The first lens112can be configured to couple the LIDAR output signal to a desired location.
In some instances, the first lens112is configured to focus or collimate the LIDAR output signal at a desired location. In one example, the first lens112is configured to couple the LIDAR output signal on the first port104when the LIDAR adapter does not include an amplifier110. As another example, when the LIDAR adapter includes an amplifier110, the first lens112can be configured to couple the LIDAR output signal on the entry port to the amplifier110. The second lens114can be configured to couple the LIDAR output signal at a desired location. In some instances, the second lens114is configured to focus or collimate the LIDAR output signal at a desired location. For instance, the second lens114can be configured to couple the LIDAR output signal on the facet19of the input waveguide16.

The LIDAR adapter can also include one or more direction changing components such as mirrors or prisms.FIG.2illustrates the LIDAR adapter including a mirror115as a direction-changing component115that redirects the LIDAR return signal from the circulator100to the facet19of the input waveguide16.

The LIDAR chip includes one or more waveguides that constrain the optical path of one or more light signals. While the LIDAR adapter can include waveguides, the optical path that the signals travel between components on the LIDAR adapter and/or between the LIDAR chip and a component on the LIDAR adapter can be free space. For instance, the signals can travel through the atmosphere in which the LIDAR chip, the LIDAR adapter, and/or the base102is positioned when traveling between the different components on the LIDAR adapter and/or between a component on the LIDAR adapter and the LIDAR chip. As a result, the components on the adapter can be discrete optical components that are attached to the base102.

When the LIDAR system includes a LIDAR chip and a LIDAR adapter, the LIDAR chip, electronics, and the LIDAR adapter can be included in a LIDAR assembly where the LIDAR chip, the LIDAR adapter, and all or a portion of the electronics are positioned on a common mount128. Suitable common mounts128include, but are not limited to, glass plates, metal plates, silicon plates and ceramic plates. As an example,FIG.3is a topview of a LIDAR system that includes the LIDAR chip and electronics62ofFIG.1and the LIDAR adapter ofFIG.2on a common mount128. AlthoughFIG.3illustrates the electronics62as located on the common mount128, all or a portion of the electronics can be located off the common mount128. When the light system2is located off the LIDAR chip, the light system can be located on the common mount128or off of the common mount128. Suitable approaches for mounting the LIDAR chip, electronics, and/or the LIDAR adapter on the common mount128include, but are not limited to, epoxy, solder, and mechanical clamping.

The LIDAR systems ofFIG.3can include one or more system components that are at least partially located off the common mount128. Examples of suitable system components include, but are not limited to, optical links, beam-shaping components, polarization state rotators, beam steering components, optical splitters, optical amplifiers, and optical attenuators. For instance, the LIDAR systems ofFIG.3can include one or more beam-shaping components130that receive the assembly output signal from the adapter and output a shaped signal. The one or more beam-shaping components130can be configured to provide the shaped signal with the desired shape.
For instance, the one or more beam-shaping components130can be configured to output a shaped signal that is focused, diverging, or collimated. InFIG.3, the beam-shaping component130is a lens that is configured to output a collimated shaped signal.

The LIDAR systems ofFIG.3can optionally include one or more beam steering components134that receive the shaped signal from the one or more beam-shaping components130and that output the system output signal. For instance,FIG.3illustrates a beam steering component134that receives the shaped signal from a beam-shaping component130. The electronics can operate the one or more beam steering components134so as to steer the system output signal to different sample regions135. The sample regions can extend away from the LIDAR system to a maximum distance for which the LIDAR system is configured to provide reliable LIDAR data. The sample regions can be stitched together to define the field of view. For instance, the field of view for the LIDAR system includes or consists of the space occupied by the combination of the sample regions. Suitable beam steering components include, but are not limited to, movable mirrors, MEMS mirrors, optical phased arrays (OPAs), optical gratings, actuated optical gratings and actuators that move the LIDAR chip, LIDAR adapter, and/or common mount128.

When the system output signal is reflected by an object136located outside of the LIDAR system, at least a portion of the reflected light returns to the LIDAR system as a system return signal. When the LIDAR system includes one or more beam steering components134, the one or more beam steering components134can receive at least a portion of the system return signal from the object136. The one or more beam-shaping components130can receive at least a portion of the system return signal from the object136or from the one or more beam steering components134and can output the assembly return signal that is received by the adapter.

The LIDAR system ofFIG.3includes an optional optical link138that carries optical signals to the one or more system components from the adapter, from the LIDAR chip, and/or from one or more components on the common mount. For instance, the LIDAR system ofFIG.3includes an optical fiber configured to carry the assembly output signal to the beam-shaping components130. The use of the optical link138allows the source of the system output signal to be located remote from the LIDAR chip. Although the illustrated optical link138is an optical fiber, other optical links138can be used. Other suitable optical links138include, but are not limited to, free space optical links and waveguides. When the LIDAR system excludes an optical link, the one or more beam-shaping components130can receive the assembly output signal directly from the adapter.

The above LIDAR systems include a variety of optical components that can serve as output components through which the system output signal exits the LIDAR system. In some instances, depending on the configuration of the LIDAR system, a beam steering component134, a beam-shaping component130, a facet of an optional optical link138such as an optical fiber, a port of a circulator100, or a facet of a utility waveguide can serve as an output component. In some instances, the output component also serves as an input component through which a system return signal enters the LIDAR system.
For instance, in some instances, depending on the configuration of the LIDAR system, a beam steering component134, a beam-shaping component130, a facet of an optional optical link138such as an optical fiber, a port of a circulator100, or a facet of a utility waveguide can serve as an input component.

FIG.4AthroughFIG.4Billustrate an example of a processing unit20that is suitable for use as the processing unit20in the above LIDAR systems. The processing unit20receives the comparative signal from the comparative waveguide18ofFIG.1and the reference signal from the reference waveguide28ofFIG.1. The processing unit includes a second splitter200that divides the comparative signal carried on the comparative waveguide150onto a first comparative waveguide204and a second comparative waveguide206. The first comparative waveguide204carries a first portion of the comparative signal to a first light-combining component211. The second comparative waveguide206carries a second portion of the comparative signal to a second light-combining component212. The processing component includes a first splitter202that divides the reference signal carried on the reference waveguide152onto a first reference waveguide210and a second reference waveguide208. The first reference waveguide210carries a first portion of the reference signal to the first light-combining component211. The second reference waveguide208carries a second portion of the reference signal to the second light-combining component212.

The second light-combining component212combines the second portion of the comparative signal and the second portion of the reference signal into a second composite signal. The second portion of the comparative signal includes a contribution from the LIDAR signal and the synchronization signal. Additionally, the second portion of the reference signal includes a contribution from the LIDAR reference signal and the synchronization reference signal. As a result, the second composite signal includes a contribution from the LIDAR signal, the LIDAR reference signal, the synchronization signal and the synchronization reference signal. Due to the difference in frequencies between the LIDAR signal contribution and the LIDAR reference signal contribution, the LIDAR signal contribution and the LIDAR reference signal contribution are beating at a LIDAR beat frequency.

The second light-combining component212also splits the resulting second composite signal onto a first auxiliary detector waveguide214and a second auxiliary detector waveguide216. The first auxiliary detector waveguide214carries a first portion of the second composite signal to a first auxiliary light sensor218that converts the first portion of the second composite signal to a first auxiliary electrical signal. The second auxiliary detector waveguide216carries a second portion of the second composite signal to a second auxiliary light sensor220that converts the second portion of the second composite signal to a second auxiliary electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).

In some instances, the second light-combining component212splits the second composite signal such that the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) included in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e.
the portion of the second portion of the comparative signal) in the second portion of the second composite signal but the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal. Alternately, the second light-combining component212splits the second composite signal such that the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal but the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the first portion of the second composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).

The first light-combining component211combines the first portion of the comparative signal and the first portion of the reference signal into a first composite signal. The first portion of the comparative signal includes a contribution from the LIDAR signal and the synchronization signal. Additionally, the first portion of the reference signal includes a contribution from the LIDAR reference signal and the synchronization reference signal. As a result, the first composite signal includes a contribution from the LIDAR signal, the LIDAR reference signal, the synchronization signal and the synchronization reference signal. Due to the difference in frequencies between the LIDAR signal contribution and the LIDAR reference signal contribution, the LIDAR signal contribution and the LIDAR reference signal contribution are beating at a LIDAR beat frequency.

The first light-combining component211also splits the first composite signal onto a first detector waveguide221and a second detector waveguide222. The first detector waveguide221carries a first portion of the first composite signal to a first light sensor223that converts the first portion of the first composite signal to a first electrical signal. The second detector waveguide222carries a second portion of the first composite signal to a second light sensor224that converts the second portion of the first composite signal to a second electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).

In some instances, the first light-combining component211splits the first composite signal such that the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) included in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal but the portion of the reference signal (i.e.
the portion of the first portion of the reference signal) in the first portion of the composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal. Alternately, the first light-combining component211splits the composite signal such that the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal but the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the first portion of the composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal. When the second light-combining component212splits the second composite signal such that the portion of the comparative signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the second composite signal, the first light-combining component211also splits the composite signal such that the portion of the comparative signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the composite signal. When the second light-combining component212splits the second composite signal such that the portion of the reference signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the second composite signal, the first light-combining component211also splits the composite signal such that the portion of the reference signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the composite signal. The first reference waveguide210and the second reference waveguide208are constructed to provide a phase shift between the first portion of the reference signal and the second portion of the reference signal. For instance, the first reference waveguide210and the second reference waveguide208can be constructed so as to provide a 90° phase shift between the first portion of the reference signal and the second portion of the reference signal. As an example, one reference signal portion can be an in-phase component and the other a quadrature component. Accordingly, one of the reference signal portions can be a sinusoidal function and the other reference signal portion can be a cosine function. In one example, the first reference waveguide210and the second reference waveguide208are constructed such that the first reference signal portion is a cosine function and the second reference signal portion is a sine function. Accordingly, the portion of the reference signal in the second composite signal is phase shifted relative to the portion of the reference signal in the first composite signal, however, the portion of the comparative signal in the first composite signal is not phase shifted relative to the portion of the comparative signal in the second composite signal. 
FIG.4Bprovides a schematic of the relationship between the electronics and the light sensors in a processing component. The symbol for a photodiode is used to represent the first light sensor223, the second light sensor224, the first auxiliary light sensor218, and the second auxiliary light sensor220but one or more of these sensors can have other constructions. In some instances, all of the components illustrated in the schematic ofFIG.4Bare included on the LIDAR chip. In some instances, the components illustrated in the schematic ofFIG.4Bare distributed between the LIDAR chip and electronics located off of the LIDAR chip.

The electronics connect the first light sensor223and the second light sensor224as a first balanced detector225and the first auxiliary light sensor218and the second auxiliary light sensor220from the same processing component as a second balanced detector226. In particular, the first light sensor223and the second light sensor224are connected in series and the first auxiliary light sensor218and the second auxiliary light sensor220are connected in series. The serial connection in each of the first balanced detectors is in communication with a first data line228that carries the output from the first balanced detector as a first data signal. The serial connection in each of the second balanced detectors is in communication with a second data line232that carries the output from the second balanced detector as a second data signal. The first data signals are each an electrical representation of a first composite signal and the second data signals are each an electrical representation of one of the second composite signals. Accordingly, each of the first data signals includes a contribution from a first waveform and a second waveform and the second data signal is a composite of the first waveform and the second waveform. The portion of the first waveform in a first data signal is phase-shifted relative to the portion of the first waveform in the second data signal but the portion of the second waveform in the first data signal is in-phase relative to the portion of the second waveform in the second data signal. For instance, the second data signal includes a portion of the reference signal that is phase shifted relative to a different portion of the reference signal that is included in the first data signal. Additionally, the second data signal includes a portion of the comparative signal that is in-phase with a different portion of the comparative signal that is included in the first data signal.

Each of the first data signals and the second data signals are beating as a result of the beating between one of the comparative signals and the associated reference signal, i.e. the beating in the first composite signal and in the second composite signal. Since a first data signal is an in-phase component and the associated second data signal its quadrature component, the first data signal and the associated second data signal together act as a complex data signal where the first data signal is the real component and the associated second data signal is the imaginary component of the input. The complex data signal is received at a LIDAR data generator234that processes the complex data signal so as to generate the LIDAR data (material indicator(s) and/or distance and/or radial velocity between the reflecting object and the LIDAR chip or LIDAR system).
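Since the first data signal and the second data signal act as the real and imaginary components of a complex data signal, the beat frequency can be estimated with a complex transform, which preserves the sign of the frequency. The sketch below is one minimal way to do this; the sampled arrays, sample rate, and window choice are assumptions for illustration, not details taken from the LIDAR data generator234.

```python
import numpy as np

def lidar_beat_frequency(first_data, second_data, sample_rate_hz):
    """Estimate the beat frequency from sampled in-phase and quadrature
    data signals.

    first_data is treated as the real (in-phase) component and
    second_data as the imaginary (quadrature) component, so positive and
    negative beat frequencies can be distinguished.
    """
    iq = np.asarray(first_data) + 1j * np.asarray(second_data)
    spectrum = np.fft.fft(iq * np.hanning(len(iq)))   # windowed complex FFT
    freqs = np.fft.fftfreq(len(iq), d=1.0 / sample_rate_hz)
    peak = np.argmax(np.abs(spectrum))                # strongest beat tone
    return freqs[peak]
```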
During operation of the LIDAR system, the electronics operate the light source7so the LIDAR signal's contribution to the system output signal (the LIDAR signal contribution) is output in a series of cycles.FIG.4Cshows the frequency of the LIDAR signal contribution over time. The frequency of the LIDAR signal contribution is repeated in a series of cycles.FIG.4Cshows two cycles labeled cycle j and cycle j+1.FIG.4Clabels a base frequency of the LIDAR signal contribution fo. The base frequency (fo) can represent the lowest frequency of the LIDAR signal contribution during the cycles.

Each cycle can be associated with a sample region in a field of view. Accordingly, during a cycle, the LIDAR system outputs the system output signal that is used to generate the LIDAR data for the sample region that is illuminated by the system output signal during that cycle. When the system output signal is steered to different sample regions, different cycles can be associated with different sample regions. Accordingly, the LIDAR data generated from different cycles can be for different sample regions.

Each cycle includes K data periods that are each associated with a period index k and are labeled DPk. In the example ofFIG.4C, each cycle includes three data periods labeled DPkwith k=1, 2, and 3. Each data period starts at time tDPj,kwhere j represents the cycle index and k represents the period index. In some instances, the frequency versus time pattern is the same for the data periods that correspond to each other in different cycles as is shown inFIG.4C. Corresponding data periods are data periods with the same period index. As a result, the data periods DP1from different cycles can be considered corresponding data periods, and the associated frequency versus time patterns are the same inFIG.4C. During the data period DP1, and the data period DP2, the electronics operate the light source7such that the frequency of the LIDAR signal contribution changes at a linear rate α. The direction of the frequency change during the data period DP1is the opposite of the direction of the frequency change during the data period DP2.

The delay time required for a system output signal to exit the LIDAR system, travel to a reflecting object, and to return to the LIDAR system is labeled τj,kinFIG.4Cwhere j represents the cycle index and k represents the period index. The delay time can vary in response to a change in distance between the LIDAR system and a reflecting object. As a result, different delay time τj,kvalues are evident inFIG.4C. The LIDAR system is typically configured to provide reliable LIDAR data at a maximum operational distance between the LIDAR system and the object. The time required for a system output signal to exit the LIDAR system, travel the maximum distance for which the LIDAR system is configured to provide reliable LIDAR data and to return to the LIDAR system is labeled τMinFIG.4C.
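Because the LIDAR signal contribution sweeps at the linear rate α in DP1and in the opposite direction in DP2, the beat frequencies measured in the two data periods can be combined to recover both the delay time and the Doppler shift. The relations below are the standard FMCW expressions written in this description's symbols; the sign convention (up-chirp in DP1, down-chirp in DP2) is an assumption for illustration.

```latex
% Standard FMCW relations, assuming chirp rate +\alpha in DP_1, -\alpha in
% DP_2, delay \tau_{j,k}, and Doppler shift f_d:
\begin{align*}
  f_{b,1} &= \alpha\,\tau_{j,k} + f_d, &
  f_{b,2} &= \alpha\,\tau_{j,k} - f_d, \\
  \tau_{j,k} &= \frac{f_{b,1} + f_{b,2}}{2\alpha}, &
  f_d &= \frac{f_{b,1} - f_{b,2}}{2}, \\
  d &= \frac{c\,\tau_{j,k}}{2}, &
  v_r &= \frac{c\,f_d}{2 f_o}.
\end{align*}
```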
Since there is a delay between the system output signal being transmitted and returning to the LIDAR system, the composite signals do not include a contribution from the LIDAR signal until after the system return signal has returned to the LIDAR system. Since the composite signal needs the contribution from the LIDAR signal for there to be a LIDAR beat frequency, the electronics measure the LIDAR beat frequency that results from system return signals that return to the LIDAR system during a data window in the data period.

The contribution from the LIDAR signal to the composite signals will be present at times larger than the maximum operational time delay (τM). As a result, the data window can extend from the maximum operational time delay (τM) to the end of the data period. When the object is close to the LIDAR system, the composite signals carry a contribution from the LIDAR signal early in the data period. As a result, the range of possible data windows can extend from the beginning of a data period to the end of the data period.FIG.4Clabels the full range of possible data windows in each data period as DGWR. As shown inFIG.4C, the range of possible data windows has a static portion extending from the maximum operational time delay (τM) to the end of the data period and a dynamic portion extending from the beginning of the data period to the maximum operational time delay (τM).

The presence of the dynamic portion of the data window range allows the electronics to select the actual data window in response to the system return signal returning to the LIDAR system. For instance, the data windows can each start at or after the associated delay time (τj,k) as is evident from the data windows labeled DGW inFIG.4Cand can extend to the end of the data period. In some instances, there is a delay between the expiration of the delay time (τj,k) and the start of a data window. For instance, examples of data windows are labeled tDGinFIG.4C. Any delay between the expiration of the delay time (τj,k) and the start of the data window can be a result of the time required for the electronics to identify the return of the system return signal associated with expiration of the delay time (τj,k) and/or can be programmed into the electronics. In some instances, the data windows (tDG) are the same as the data windows labeled DGW inFIG.4Cand there is no delay between the expiration of the delay time (τj,k) and the start of a data window.

As is evident from comparing the different data windows (tDG) inFIG.4C, the duration of the data windows (tDG) can be variable between different data periods. The data windows (tDG) can include a static portion extending from the maximum operational time delay (τM) to the end of the data window (tDG) and a dynamic portion extending from the beginning of the data window (tDG) to the maximum operational time delay (τM). The variation in the duration of the data windows (tDG) can result from the duration of the dynamic portion of the data window being different in different data periods while the duration of the static portion of the data windows remains the same in different data periods. The reliability of the LIDAR data increases as the duration of the actual data windows (tDG) increases. The ability to adjust the data window in response to the return of the system return signal allows the duration of the data windows (tDG) to be increased and accordingly increases the reliability of the LIDAR data.

Each of the data windows (tDG) inFIG.4Cextends from a window opening time labeled toj,kto a window closing time labeled tfj,kwhere j represents the cycle index and k represents the period index. Although the window closing times (tfj,k) are shown as being the same as the end of the data period, the window closing times (tfj,k) need not be the same as the end of the data periods. Accordingly, the data windows (tDG) can close before the end of the data periods. In some instances, the data windows (tDG) have a duration that is greater than or equal to 60%, or 80% of the time between the delay time (τj,k) and the end of the data period.
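A minimal sketch of the window selection just described: the window opens when the return is detected (plus any processing lag), never later than the maximum operational delay, and closes at the end of the data period, so shorter delays give longer windows. The function name and the numeric values in the example are hypothetical.

```python
def data_window(detected_delay_s, t_dp_s, tau_max_s, processing_lag_s=0.0):
    """Return (open, close) times of the data window within one data
    period, measured from the start of the period.

    The window opens once the system return signal is detected (plus any
    processing lag), and in the worst case at the maximum operational
    delay tau_M; it closes at the end of the data period.
    """
    t_open = min(detected_delay_s + processing_lag_s, tau_max_s)
    t_close = t_dp_s
    return t_open, t_close

# Example: 2 us data period, 1.33 us maximum delay, return detected at 0.4 us.
# The tuned window lasts ~1.6 us instead of the ~0.67 us a fixed window allows.
print(data_window(0.4e-6, 2.0e-6, 1.33e-6))
```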
FIG.4Dcompares a synchronization signal to a portion of the frequency versus time graph ofFIG.4C. The synchronization signal and the graph are both shown relative to the same time axis. During a code portion of each data period (labeled C), the synchronization signal carries a binary code. As a result, the synchronization signal contribution to the system output signal carries the binary code. The code portion of each data period is illustrated as being divided into N+1 bits. InFIG.4D, the code portion of each cycle is illustrated as having 6 bits (N=5) for the purpose of simplifying the illustration and the following discussion. The bits in the code portion of each data period are each labeled bror bnwhere n represents a bit index that is an integer. For the purposes of simplicity, the bit indices (n) illustrated inFIG.4Dhave values of 1 through 5. The bit index can be assigned relative to time. For instance, a lower bit index is output from the LIDAR system before a bit with a higher bit index. The bit with bit index one (b1) can be the first bit carried by the system output signal at the start of the code portion of the cycle.

Each code portion bit has a duration labeled Tp. The transmission of the code portion of the system output signal for data period k in cycle j can start at t=tDPj,kand end at t=tDPj,k+(N+1)*Tp. The duration of each data period can be represented by tDP. The value of tDPcan be selected such that tDP≥(N+1)*Tpto allow the code portion of the system output signal time to return to the LIDAR system before the start of the next data period when the reflecting object is positioned at the maximum distance for which the LIDAR system is configured to provide reliable results. In some instances, tDP=(N+1)*Tp. As a result, the code portion can be transmitted for the full duration of the data period. When the code portion is transmitted for the full duration of the data period, the intensity modulator12need not be present in the LIDAR system. In some instances, the electronics can operate the intensity modulator12so the transmission of the synchronization signal contribution to the system output signal is stopped between the time t=tDPj,k+(N+1)*Tpand time t=tDPj,k+tDP. Alternately, (N+1)*Tpcan be selected so the code portion is equal to tDP. Accordingly, the synchronization signal contribution to the system output signal can carry the code portion for the entire duration of the data period.

In some instances, the bit durations (Tp) are ≤2*R/c where c represents the speed of light and R represents a range resolution that can be the minimum distinguishable distance between two adjacent targets. The range resolution (R) can be application specific. In some instances, the bit durations (Tp) are less than 10 ns, 5 ns, or 1 ns. The number of bits for the code portion of the synchronization signal contribution to the system output signal can be represented by N where N=tDP/Tpand tDP≥τM. In some instances, N is greater than 300, or 1000 and/or is less than 3000 or 10000. In an example where the maximum range is 200 m, N>1300.
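As a worked instance of the stated relations, assuming a range resolution of R=0.15 m (an assumption; the text leaves R application specific) and the 200 m maximum range from the example:

```latex
% Assumed R = 0.15 m; 200 m maximum range from the text.
T_p \le \frac{2R}{c} = \frac{2 \times 0.15\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} = 1\,\mathrm{ns},
\qquad
\tau_M = \frac{2 \times 200\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} \approx 1.33\,\mathrm{\mu s},
\qquad
N = \frac{t_{DP}}{T_p} \ge \frac{\tau_M}{T_p} \approx 1333 > 1300.
```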
During each data period, the electronics operate the components of the LIDAR system such that the synchronization signal is encoded with a binary code during the code portion of the data period. For instance, when a light system2includes the illustrated synchronization light source5and phase modulator11, the electronics can operate the phase modulator11such that the synchronization signal is encoded with the binary code during the code portion of the data period. Examples of a suitable binary code include, but are not limited to, m-sequences.

FIG.4Dincludes an example of a binary code labeled TDC. The binary code is divided into N bits. Each of the N bits carries a digit from the binary code. The illustrated binary code includes N=5 bits and consists of 0s and 1s. For instance, the code illustrated inFIG.4Dis represented by 0, 1, 1, 0, 0. Equivalent versions of the code can also be used. For instance, a bi-polar version of a binary code uses only the digits 1 and −1. An example of an equivalent bi-polar representation of the binary code 0, 1, 1, 0, 0 can be 1, −1, −1, 1, 1.

The binary code is selected to have good autocorrelation properties. A code can be multiplied by a copy of the code to produce a numerical alignment indicator (autocorrelation value). The copy of the code can be a direct copy of the code or a different version of the code. When multiplying a code by the copy, the copy can be shifted relative to the code or can be unshifted relative to the code. When the copy is shifted relative to the code, the shift can be by one or more bits. When multiplying the code and the copy, each bit in the code is associated with one of the bits in the copy. When the copy is unshifted relative to the code, each bit from the code is associated with itself in the copy and the copy and the code are considered to be aligned. The shifting of the copy relative to the code changes the bits from the copy that are associated with the bits from the code. During multiplication of the code and the copy, each bit from the code is multiplied by the associated bit in the copy and the results from each bit multiplication are added to provide the alignment indicator. The alignment indicator can be generated for multiple different shifts of the copy relative to the code, including a shift of zero bits (alignment). As a result, a function indicating a value of the alignment indicator versus the number of bits for the shift can be generated.

The sequence of digits in autocorrelated codes is selected such that the value of the alignment indicator peaks when the code and copy are aligned but is constant or substantially constant at lower values when the code and copy are not aligned. Examples of suitable codes are the codes that have been developed for wireless systems and exist in mature standards such as the global third generation (3G) wideband code division multiple access (CDMA) standards. In some instances, the code is selected such that when the alignment indicator values are normalized to have a value from 0 to 1 with the alignment indicator at alignment having a value of 1, and when the copy is shifted away from alignment with the code in either or both directions by a number of bits called the shift number, the value of the alignment indicator is less than 0.1, or 0.05. The shift number can be greater than or equal to 1, 2, or 3. In some instances, this condition is maintained for each non-zero shift number in the code. In some instances, the value of the alignment indicator is less than 0.1 or 0.05 for each shift number greater than or equal to 1, 2, or 3 and/or less than 150 or 3000 when the copy is shifted by the shift number in one direction or both directions.
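The alignment-indicator computation described above can be written directly. The sketch below evaluates the normalized autocorrelation of the 5-bit example code in its bi-polar form with cyclic shifts; note that a code this short does not exhibit the low off-peak values of a real m-sequence, so the printed sidelobes are larger than the 0.1 or 0.05 targets named above.

```python
def alignment_indicator(code, shift):
    """Multiply a bipolar code by a cyclically shifted copy of itself and
    sum the bit products (the autocorrelation value for that shift)."""
    n = len(code)
    return sum(code[i] * code[(i + shift) % n] for i in range(n))

# Bipolar form of the example code 0,1,1,0,0 -> 1,-1,-1,1,1.
code = [1, -1, -1, 1, 1]
peak = alignment_indicator(code, 0)        # value at alignment
for shift in range(len(code)):
    print(shift, alignment_indicator(code, shift) / peak)  # normalized
```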
When a light system 2 includes the illustrated synchronization light source 5 and phase modulator 11, the electronics can operate the phase modulator 11 such that the code is carried in the phase of the synchronization signal. For instance, the phase of the synchronization signal can be differentially phase shifted according to the code using phase shift keying (PSK). In an example of differential phase shifting, the phase of the synchronization signal is changed by a first phase shift when the synchronization signal is to show a first digit of the binary code and is changed by a second phase shift when the synchronization signal is to show a second digit of the binary code. The amount of the first phase shift or the second phase shift can be zero degrees. As an example, FIG. 4D illustrates the bits in the code portion of the synchronization signal encoded by differential phase shifting to carry the binary code labeled TDC. Encoding by differential phase shifting carries data at the interface between adjacent bits. There may not be a bit before the code portion bit b1. As a result, a reference bit labeled br is added to the bits in the code portion of the synchronization signal. The reference bit (br) can have a set value that is not a function of the digits in the binary code. In FIG. 4D, the reference bit labeled br carries a value of 0 but it could carry a value of π. To apply a differential phase shift scheme to the binary code illustrated in FIG. 4D, the first digit can be 0 and the second digit can be 1. The first phase shift can be 0 rad and the second phase shift can be π rad. As a result, the phase of the synchronization signal can be changed by 0 rad when the synchronization signal contribution to the system output signal is to show a 0 and is changed by π rad when the synchronization signal contribution to the system output signal is to show a 1. An example of how the differential phase shift scheme is applied to a binary code is provided in FIG. 4D. FIG. 4D includes a graph showing the phase of the synchronization signal contribution to the system output signal as a function of time. The values of the first phase shift and the second phase shift are represented by an encoded phase shift labeled βn where n represents the bit index. Accordingly, βn can have a value of 0 (first phase shift) or π (second phase shift). The variable labeled Bn in FIG. 4D represents the cumulative value of the βn values up to bit index n. The binary code labeled TDC is placed on the graph to show how values in the slots of the binary code translate to the βn values. The binary code slot labeled td1 has a value of 0. As a result, the transition from bit br to bit b1 shows an encoded phase shift (β1) of 0 radians. The binary code slot labeled td2 has a value of 0. As a result, the transition from bit b1 to bit b2 shows an encoded phase shift (β2) of 0 radians. The binary code slot labeled td3 has a value of 1. As a result, the transition from bit b2 to bit b3 shows an encoded phase shift (β3) of π radians. The binary code slot labeled td4 has a value of 1. As a result, the transition from bit b3 to bit b4 shows an encoded phase shift (β4) of π radians. The binary code slot labeled td5 has a value of 0. As a result, the transition from bit b4 to bit b5 shows an encoded phase shift (β5) of 0 radians.
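A minimal encoder sketch for this differential scheme follows, using the td1 through td5 values from the walk-through above; the digit-to-phase mapping mirrors the example, and the rest of the structure is an assumed illustration.

    import numpy as np

    tdc = [0, 0, 1, 1, 0]                            # digits td1..td5 above
    beta = [0.0 if d == 0 else np.pi for d in tdc]   # per-transition shift
    B = np.concatenate(([0.0], np.cumsum(beta)))     # carried phase Bn for br, b1..b5
    print(B)   # [0, 0, 0, pi, 2*pi, 2*pi]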
When the synchronization signal and the synchronization reference signal are continuous wave signals and the LIDAR system is operated as disclosed in the context of FIG. 4C through FIG. 4D, the frequencies that can be present in the first data signal and the second data signal are illustrated in FIG. 4E. The beat frequency between the LIDAR signal and the LIDAR reference signal is in a LIDAR signal band centered at DC and extending from −fmax,L to +fmax,L where fmax,L=(α*τM+|fdmax,L|) where fdmax,L represents the maximum value of the Doppler frequency shift in the system output signal for which the LIDAR system is configured to operate. The beat frequency between the synchronization signal and the synchronization reference signal is in a synchronization signal band centered at (DC+δf) and extending from (DC+δf−fmax,s) to (DC+δf+fmax,s) where fmax,s=(1/Tp+|fdmax,s|) where fdmax,s represents the maximum value of the Doppler frequency shift in the synchronization signal for which the LIDAR system is configured to operate. The location of the LIDAR signal band and the synchronization signal band can be a function of a variable δf where δf represents the frequency separation between the center of the synchronization signal band and the center of the LIDAR signal band. Overlap between the synchronization signal band and the LIDAR signal band can be avoided by selecting δf such that δf>2*max(fmax,L, fmax,s). As shown in FIG. 4E, unwanted frequencies may be present in the first data signal and the second data signal. The location of the unwanted frequencies is related to the value of Δf where Δf=fr,s−fo where fr,s represents the frequency of the synchronization reference signal and fo represents the base frequency disclosed in the context of FIG. 4C. The value of Δf can be such that Δf>(3*δf+α*tDP) to separate the unwanted frequencies from the desired frequencies by moving the unwanted frequencies to frequencies above the synchronization signal band and the LIDAR signal band; where α represents the rate of change (chirp rate) of the LIDAR signal contribution as disclosed above and tDP represents the duration of the data period. FIG. 4F is a block diagram of an example of a suitable LIDAR data generator 234. The LIDAR data generator 234 includes a separator 236 that receives the first data signal from the first data line 228 and the second data signal from the second data line 232. The first data signal and the second data signal each carries a LIDAR signal contribution, a LIDAR reference signal contribution, a synchronization signal contribution, and a synchronization reference signal contribution. The separator 236 is configured to separate the synchronization signal contribution and the synchronization reference signal contribution carried in the first data signal from the LIDAR signal contribution and the LIDAR reference signal contribution carried in the first data signal. The separator 236 is also configured to separate the synchronization signal contribution and the synchronization reference signal contribution carried in the second data signal from the LIDAR signal contribution and the LIDAR reference signal contribution carried in the second data signal. The separator 236 outputs the synchronization signal contribution and the synchronization reference signal contribution extracted from the first data signal on a first synchronization line 238 as a first separated synchronization signal.
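The band-placement conditions above can be checked numerically, as in the hedged sketch below; every parameter value (chirp rate, delays, Doppler limits) is an assumed example used only to exercise the two inequalities.

    alpha = 1.0e14        # chirp rate (Hz/s), assumed
    tau_M = 1.33e-6       # maximum roundtrip delay (s), assumed
    Tp = 1.0e-9           # bit duration (s), assumed
    fd_max_L = 30.0e6     # max LIDAR Doppler shift (Hz), assumed
    fd_max_s = 30.0e6     # max synchronization Doppler shift (Hz), assumed
    tDP = 1.33e-6         # data period duration (s), assumed

    f_max_L = alpha * tau_M + abs(fd_max_L)   # LIDAR band half-width
    f_max_s = 1.0 / Tp + abs(fd_max_s)        # synchronization band half-width

    delta_f = 2.5 * max(f_max_L, f_max_s)         # satisfies delta_f > 2*max(...)
    Delta_f = 3 * delta_f + alpha * tDP + 1.0e6   # satisfies Delta_f > 3*delta_f + alpha*tDP
    print(f_max_L, f_max_s, delta_f, Delta_f)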
The separator 236 outputs the synchronization signal contribution and the synchronization reference signal contribution extracted from the second data signal on a second synchronization line 240 as a second separated synchronization signal. The first separated synchronization signal and the second separated synchronization signal act together as a complex separated synchronization signal. The complex separated synchronization signal is received at a return identifier 242. The return identifier 242 processes the complex separated synchronization signal so as to identify the delay time (τj,k in FIG. 4C). The return identifier 242 outputs a return identification signal that indicates the delay time (τj,k). The separator 236 outputs the LIDAR signal contribution and the LIDAR reference signal contribution extracted from the first data signal on a first LIDAR line 244 as a first separated LIDAR signal. The separator 236 outputs the LIDAR signal contribution and the LIDAR reference signal contribution extracted from the second data signal on a second LIDAR line 246 as a second separated LIDAR signal. The first separated LIDAR signal and the second separated LIDAR signal act together as a complex separated LIDAR signal. The complex separated LIDAR signal and the return identification signal are received at a frequency identifier 250. The frequency identifier 250 uses the delay time (τj,k) to identify the data window (labeled tDG in FIG. 4C). The frequency identifier 250 uses the portion of the complex separated LIDAR signal that is generated from system return signals that return to the LIDAR system within the selected data window to identify the LIDAR beat frequency. The frequency identifier 250 outputs a frequency signal that indicates the identified beat frequency. The frequency signal is received at a data generator 252 that uses the identified frequency to generate the LIDAR data for an object that reflected the system return signals. As a result, the LIDAR data is generated from system return signals that return to the LIDAR system during the data window, but system return signals that return to the LIDAR system outside the data window are not and/or need not be used in the generation of the LIDAR data. FIG. 4G is a schematic of an example LIDAR data generator 234 suitable for use as the LIDAR data generator 234 of FIG. 4F. The LIDAR data generator 234 includes a separator 236 that receives the first data signal from the first data line 228 and the second data signal from the second data line 232. The LIDAR data generator 234 optionally includes amplifiers 298 configured to amplify the first data signal and the second data signal. The separator 236 includes a first multiplier 300 that receives the first data signal and a second multiplier 302 that receives the second data signal. As evident from FIG. 4E, the synchronization signal band is centered at (DC+δf). As a result, the first multiplier 300 is configured to downconvert the first data signal from (DC+δf) such that the synchronization signal band is centered at DC. Additionally, the second multiplier 302 is configured to downconvert the second data signal from (DC+δf) such that the synchronization signal band is centered at DC. The first multiplier 300 outputs the converted first data signal and the second multiplier 302 outputs the converted second data signal. The separator 236 includes a first filter 304 that receives the converted first data signal. The first filter 304 is selected to filter out the LIDAR signal contribution, the LIDAR reference signal contribution, and the undesired higher frequency components discussed in the context of FIG. 4E.
As a result, the first filter 304 passes the synchronization signal contribution and the synchronization reference signal contribution in a first filtered signal. The separator 236 also includes a second filter 306 that receives the converted second data signal. The second filter 306 is selected to filter out the LIDAR signal contribution, the LIDAR reference signal contribution, and the undesired higher frequency components discussed in the context of FIG. 4E. As a result, the second filter 306 passes the synchronization signal contribution and the synchronization reference signal contribution in a second filtered signal. The first filtered signal and the second filtered signal together serve as a complex filtered signal. The complex filtered signal is received at the return identifier 242. Suitable first filters 304 and/or second filters 306 include, but are not limited to, lowpass filters and filter pairs with matching responses. The return identifier 242 includes a first Analog-to-Digital Converter (ADC) 340 that receives the first filtered signal, converts the first filtered signal from an analog form to a digital form, and outputs a first digital data signal. The return identifier 242 includes a second Analog-to-Digital Converter (ADC) 342 that receives the second filtered signal, converts the second filtered signal from an analog form to a digital form, and outputs a second digital data signal. To generate a digital form of the complex filtered signal, the first Analog-to-Digital Converter (ADC) 340 and the second Analog-to-Digital Converter (ADC) 342 each periodically samples one of the filtered signals. As a result, the first digital data signal and the second digital data signal each carries a series of ADC samples of one of the filtered signals. As described above, the system output signal carries multiple bits of a code. As a result, the synchronization signal contribution to the system return signal and the filtered signals each carry multiple bits. The sampling rate of the first Analog-to-Digital Converter (ADC) 340 and the second Analog-to-Digital Converter (ADC) 342 can be selected such that each bit in each of the filtered signals is sampled multiple times. Accordingly, the first digital data signal and the second digital data signal each carries multiple ADC samples from each bit of one of the filtered signals. The first digital data signal is received at a delay 344 and a multiplier 346. The second digital data signal is also received at the delay 344 and the multiplier 346. The delay 344 delays the first digital data signal by a delay period and outputs a delayed first digital data signal. The delay 344 delays the second digital data signal by the delay period and outputs a delayed second digital data signal. The amount of the delay period can be equal to or substantially equal to the bit duration Tp. The delayed first digital data signal and the delayed second digital data signal are received at a conjugator 348. The conjugator 348 generates the conjugate of the complex signal resulting from the delayed first digital data signal and the delayed second digital data signal. As a result, the conjugator 348 outputs a conjugate signal that carries the conjugate of the complex signal represented by the delayed first digital data signal and the delayed second digital data signal. The multiplier 346 receives the conjugate signal and multiplies the conjugate signal by the complex signal carried by the combination of the first digital data signal and the second digital data signal.
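A rough software model of the separator's downconvert-and-filter stage for one channel is sketched below; the sample rate, band offset, tone frequencies, and the Butterworth lowpass are all assumed stand-ins for whatever circuitry a real implementation would use.

    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 10.0e9                       # sample rate (Hz), assumed
    t = np.arange(0, 2e-6, 1 / fs)
    delta_f = 2.0e9                   # synchronization band offset (Hz), assumed
    lidar = np.cos(2 * np.pi * 50e6 * t)              # LIDAR beat near DC
    sync = np.cos(2 * np.pi * (delta_f + 200e6) * t)  # sync beat near delta_f
    data = lidar + sync               # composite data signal

    # Downconvert so the synchronization band is centered at DC ...
    mixed = data * np.exp(-2j * np.pi * delta_f * t)
    # ... then lowpass away the LIDAR contribution and higher-frequency images.
    b, a = butter(5, 400e6 / (fs / 2))
    sync_only = lfilter(b, a, mixed)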
Since the conjugate signal is generated from delayed signals but the first digital data signal and the second digital data signal are not delayed, the multiplier 346 multiplies a delayed signal by a non-delayed signal. Since the amount of delay can be equal to the bit duration (Tp), the delayed signal and the non-delayed signal are from adjacent bits. The multiplier 346 outputs a code signal that carries a scaled and phase-rotated version of the code. The code signal is received at a matched filter 350 configured to convert the code signal from a square form to a triangular form. The matched filter 350 is matched to the system output signal. For instance, the matched filter 350 can convolve the code signal with a matched filter impulse response that is a square wave matched to the code signal. A correlator 352 receives the convolved code signal from the matched filter 350. The correlator 352 multiplies the binary code or an equivalent version of the binary code by the convolved code signal so as to generate an alignment indicator as described above. The alignment indicator is generated for multiple shifts of the convolved code signal to generate data indicating the value of the alignment indicator versus the degree of shifting between the binary code or an equivalent version of the binary code and the convolved code signal, i.e. versus the number of bits for the shift. The correlator 352 outputs a correlation signal indicating the value of the alignment indicator versus the degree of shifting. As will become evident below, the alignment indicator can be a complex number. The correlation signal is received by a power component 356 that generates a power signal that indicates a power level of the correlation signal versus the degree of shifting. For instance, the power component 356 can calculate the value of Re^2+Im^2 from the alignment indicator where Re represents the real component of the alignment indicator and Im represents the imaginary component of the alignment indicator. The power signal is received at a peak finder 358 that identifies a peak in the power signal that is a result of a system output signal being reflected by an object located outside of the LIDAR system. The output of the peak finder 358 is received at a delay identifier 360 that uses the identified peak to determine the delay time (τj,k). As noted above, the synchronization signal contribution to the system output signal carries data from a code arranged in a series of bits. As a result, the contribution of the synchronization signal to the system return signal and the resulting complex filtered signal also carry this code arranged in the same series of bits. FIG. 4H includes an arrow labeled S that illustrates the complex filtered signal. To illustrate the portions of the complex filtered signal that are associated with different bits, the code and phase versus bit pattern from FIG. 4D are copied into FIG. 4H. Each of the different bits is labeled b1 through b5 and is positioned over the portion of the complex filtered signal that carries the data from that bit. As discussed above, the illustrated bits are associated with the encoded phase shifts. In FIG. 4H, the times where the first Analog-to-Digital Converter (ADC) 340 and the second Analog-to-Digital Converter (ADC) 342 sample the complex filtered signal are illustrated by the circles labeled sn,k where n is the bit index and k is an ADC sample index. The number of ADC samples per bit can be represented by M. In the illustrated example, the bits are sampled at a rate of twice per bit (M=2).
The ADC samples are arranged in the complex filtered signal (labeled S) such that as time increases, the value of the bit index (n) stays constant while the value of the ADC sample index (k) increases from 1 to M. After the value of the ADC sample index (k) reaches M, the value of the bit index is increased by 1, the ADC sample index (k) is re-set to one, and the sequence is repeated. The electronics can operate one or more optical components so as to provide the synchronization reference signal and the synchronization signal with the desired characteristics. For instance, the electronics can operate one or more components selected from a group consisting of the light system 2, phase modulator 11, and intensity modulator 12 such that the synchronization reference signal can be represented by Sref(t)=cos(2π*fr,s*t) and the synchronization signal can be represented by Stx(t)=cos(2π*fs*t+Bn) where fr,s represents the frequency of the synchronization reference signal, fs represents the frequency of the synchronization signal, and Bn represents the cumulative encoded phase shift for the bit with bit index n as disclosed in the context of FIG. 4D. The synchronization reference signal is not encoded with the binary code while the synchronization signal carries the binary code, as is evident from the presence of the term Bn in Stx(t). When the synchronization reference signal and the synchronization signal are represented by Sref(t) and Stx(t) as described above, each ADC sample carried by the first digital data signal and the second digital data signal can be represented by sn,k=A*e^(j*ϕn) where A represents the amplitude, j indicates a complex value (i.e. sqrt(−1)), and n represents the bit index. As a result, the ADC sample can also be represented by A*(cos(ϕn)+j*sin(ϕn)). The variable ϕn represents the phase during the ADC sample with bit n and can be determined from ϕn=θ0+ωd*t+n*ωd*Tp+Bn where θ0 is a constant that can be zero, t represents time, ωd is the Doppler frequency, n is the bit index, and Tp represents the bit duration. During the conjugation and multiplication performed by the multiplier 346 and conjugator 348, the code signal can be generated by multiplying the ADC sample sn,k by cn−1,k, where cn−1,k represents the conjugate of the ADC sample sn−1,k. As a result, the code signal can be represented by a series of CS samples represented by dn,k=A^2*e^(j*ωd*Tp)*e^(j*βn) where βn has a value of 0 radians or π radians that can change in response to changes in the bit index n as discussed in the context of FIG. 4D. The code signal is labeled C in FIG. 4H. The circles on the code signal are each vertically aligned with a circle on the complex filtered signal. The vertically aligned circles correspond to the same ADC sample. The CS samples are arranged in the code signal (labeled C) such that as time increases, the value of the bit index (n) stays constant while the value of the ADC sample index (k) increases from 1 to M. After the value of the ADC sample index (k) reaches M, the value of the bit index is increased by 1, the ADC sample index (k) is re-set to one, and the sequence is repeated. The multiplication of the ADC sample by the conjugate of a previous ADC sample removes the ωd*t term that was present in the phase (ϕn) of the ADC samples (sn,k=A*e^(j*ϕn)) from the CS samples (dn,k=A^2*e^(j*ωd*Tp)*e^(j*βn)) where ωd represents a Doppler shift that induces sinusoids in the ADC output as a result of LIDAR echoes. The use of differential phase shift keying combined with this multiplication removes this sine wave from the LIDAR data solutions below.
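The cancellation can be verified numerically, as in the sketch below; the amplitude, Doppler frequency, and timing are assumed toy values, and the sketch folds the per-bit Doppler accumulation into the ωd*t term so the delay-conjugate-multiply leaves only the constant rotation ωd*Tp plus the code phase βn.

    import numpy as np

    A, wd, Tp, M = 1.0, 2 * np.pi * 5e6, 1e-9, 2         # assumed values
    beta = np.array([0.0, 0.0, 0.0, np.pi, np.pi, 0.0])  # steps for br, b1..b5
    B = np.cumsum(beta)                                  # cumulative phase Bn

    samples = np.zeros((6, M), dtype=complex)
    for n in range(6):
        for k in range(M):
            t = (n * M + k) * (Tp / M)                   # sample time
            samples[n, k] = A * np.exp(1j * (wd * t + B[n]))

    # Multiply each bit's samples by the conjugate of the previous bit's samples.
    d = samples[1:] * np.conj(samples[:-1])
    # Only the constant rotation wd*Tp and the code phase beta_n remain;
    # the time-varying wd*t term has cancelled.
    print(np.round(np.angle(d) - wd * Tp, 3))   # ~0 or ~±pi per bit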
The matched filter 350 receives the code signal and outputs the convolved code signal labeled P in FIG. 4H. Circles on the convolved code signal are each vertically aligned with a circle on the complex filtered signal. The vertically aligned circles correspond to the same ADC sample. The matched filter 350 is configured to convert the code signal from a square form to a triangular form that is output from the matched filter 350 as the convolved code signal. The convolved code signal (CCS) can carry a series of CCS samples represented by pn,k where n is the bit index and k is a sample index. The CCS samples are arranged in the convolved code signal (labeled P) such that as time increases, the value of the bit index (n) stays constant while the value of the ADC sample index (k) increases from 1 to M. After the value of the ADC sample index (k) reaches M, the value of the bit index is increased by 1, the ADC sample index (k) is re-set to one, and the sequence is repeated. Each pn,k is associated with one of the ADC samples. The value of CCS sample pn,k can be generated by convolving the code signal and the matched filter impulse response. In some instances, the filter impulse response is a square wave matched to the bit shape or pulse shape of the code signal. The code signal can be convolved with the matched filter impulse response to produce 2M−1 different convolution values for each of the bits. Each of the different convolution values can be labeled vn,m where n represents the bit index and m is an integer with a value from 1 to 2M−1. The different convolution values can be generated by identifying the portion of the code signal associated with the CCS samples in the same bit. Below, the identified portion of the code signal is called the common bit portion. For instance, the CS samples d2,1 and d2,2 are associated with the same bit having bit index n=2. As a result, CS samples d2,1 and d2,2 represent a common bit portion of the code signal. The common bit portion can be multiplied by the filter impulse response. For instance, the filter impulse response can be a signal having a series of M samples represented by fq where q is an index for the filter impulse response and extends from 1 to M. The different convolution values (vn,m) for a single bit with bit index n can result from shifting the filter impulse response different degrees relative to the common bit portion and calculating the convolution value vn,m for each degree of shift. The shift can be by one or more CS samples and is done such that at least one CS sample in the common bit portion is associated with one of the filter impulse response samples. The shifting of the filter impulse response relative to the common bit portion changes the samples from the filter impulse response that are associated with the CS samples from the common bit portion. During multiplication of the code and the filter impulse response, each CS sample from the common bit portion is multiplied by the associated sample from the filter impulse response and the results from each bit multiplication are added to provide the convolution values (vn,m). When a CS sample from the common bit portion is not associated with a sample from the filter impulse response, the unassociated CS sample is multiplied by 0.
When a sample from the filter impulse response is not associated with a CS sample from the common bit portion, the unassociated sample from the filter impulse response is multiplied by 0. Each convolution value (vn,m) is associated with one of the ADC samples. For instance, convolution value vn,m can be associated with ADC sample sn,m. However, there are M−1 more convolution values than there are ADC samples associated with a bit, i.e. 2M−1 is greater than M. For convolution values with m>M, the convolution value (vn,m) is associated with ADC sample (sn+1,m−M). As a result, convolution values (vn,m) from different bits can be associated with the same ADC sample. The convolution values associated with the same ADC sample are added together to get the value of the CCS sample pn,k associated with that ADC sample. When a single convolution value (vn,m) is associated with an ADC sample, that convolution value serves as the value of the CCS sample (pn,k) associated with that ADC sample. FIG. 4I illustrates an example convolution. The example shows two ADC samples per bit, i.e. M=2. Accordingly, there are 2M−1=3 convolution values per bit and two filter impulse response samples represented by f1 and f2. The convolution values for bit n=1 and n=2 are shown and a portion of the convolution values for n=0 and n=3 are shown. Since the convolution values v1,3 and v2,1 are associated with the same ADC sample, the values of v1,3 and v2,1 are added to determine that p2,1=v1,3+v2,1. The convolved code signal is received by the correlator 352. The correlator 352 includes a first tapped delay line 400. The first tapped delay line 400 includes delay cells 402 that each receives one of the CCS samples (pn,k). The correlator 352 includes a second tapped delay line 404. The second tapped delay line 404 includes second delay cells 406 that each receives one of the CCS samples (pn,k) from the first tapped delay line 400. The CCS samples (pn,k) that the second delay cells 406 receive from the first tapped delay line 400 are separated by M−1 delay cells 402. For instance, the above illustration uses M=2 ADC samples per bit. As a result, the CCS samples (pn,k) provided to the second tapped delay line 404 are separated by one delay cell. The correlator 352 multiplies at least a portion of the bi-polar version of the binary code by the CCS samples (pn,k) in the second delay cells 406. FIG. 4J shows a portion of the bi-polar version of the binary code for the purposes of simplicity. Each digit in the binary code is associated with one of the second delay cells 406 as shown in FIG. 4J. The correlator 352 includes several multipliers 408 that each multiplies one of the digits from the binary code by the contents of the associated second delay cell 406. The correlator 352 also includes an adder 409 that adds the multiplication results so as to generate the alignment indicator. After generating an alignment indicator, the CCS samples (pn,k) in the delay cells 402 are each shifted in the same direction by the same amount. For instance, the CCS samples (pn,k) in the delay cells 402 can each be shifted by one or more cells within the first tapped delay line 400. In some instances, the CCS samples (pn,k) in the delay cells 402 are each shifted by a single cell. As a result, one or more CCS samples (pn,k) exit the first tapped delay line 400 and one or more CCS samples (pn,k) enter the first tapped delay line 400 from the convolved code signal. The changing of the CCS samples (pn,k) in the delay cells 402 leads to a change in the CCS samples (pn,k) in the second delay cells 406.
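The per-bit bookkeeping in the FIG. 4I example is what a full discrete convolution performs automatically; the short sketch below uses an assumed three-bit CS sample sequence with M=2 and a square impulse response, and overlapping contributions add exactly as in the p2,1=v1,3+v2,1 example.

    import numpy as np

    M = 2
    d = np.array([1.0, 1.0, -1.0, -1.0, 1.0, 1.0])  # CS samples: 3 bits, M=2
    f = np.ones(M)                                   # square impulse response

    # Full convolution; convolution values that share an ADC sample are
    # added automatically, mirroring the p2,1 = v1,3 + v2,1 style sums.
    p = np.convolve(d, f)
    print(p)   # triangular per-bit shape: [1, 2, 0, -2, 0, 2, 1]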
The multipliers 408 each multiplies one of the digits from the binary code by the contents of the associated second delay cell 406 and the adder 409 adds the multiplication results. The convolved code signal that carries the CCS samples (pn,k) carries a version of the binary code because the pn,k values are a product of the dn,k=A^2*e^(j*ωd*Tp)*e^(j*βn) values where the βn have values that correspond to the digits of the binary code. For instance, the βn values of 0 radians can correspond to binary code values of 0 and βn values of π radians can correspond to binary code values of 1. As a result, the multiplication of the bi-polar version of the binary code by the convolved code signal is effectively a multiplication of two different versions of the binary code. As a result, an alignment indicator results from the multiplications and addition performed by the multipliers 408 and adder 409. Accordingly, the adder 409 outputs the alignment indicator. The CCS samples (pn,k) in the second delay cells 406 are shifted again and yet another alignment indicator is generated. The process of shifting the CCS samples (pn,k) in the second delay cells and generating an alignment indicator is repeated so as to generate data indicating the value of the alignment indicator as a function of time. As a result, the correlation signal output from the correlator 352 indicates a series of alignment indicators that can each be represented by aq where q is an alignment indicator index. The series of alignment indicator values in the correlation signal indicates the value of the alignment indicator versus the degree of shifting. The correlation signal is received by a power component 356. The power component 356 outputs a power signal that indicates a power level of the correlation signal versus the degree of shifting. For instance, the power component 356 can calculate the value of Re^2+Im^2 for all or a portion of the alignment indicators (aq) where Re represents the real component of the alignment indicator aq and Im represents the imaginary component of the alignment indicator aq. Accordingly, a value of Re^2+Im^2 can be generated for the values of the alignment indicators (aq) in the correlation signal. The determination of Re^2+Im^2 removes the ωd*Tp term that is present in the phase of the CCS samples (dn,k=A^2*e^(j*ωd*Tp)*e^(j*βn)) from the calculation of the LIDAR data. FIG. 4K is a graph that includes an example of the power signal versus time. Accordingly, the graph includes a curve showing the power level of the alignment indicators (aq) versus time. The graph includes a location labeled "start of data period" and a location labeled "end of data period." The "start of data period" can indicate the start of a data period such as the data period disclosed in the context of FIG. 4C. As a result, the "start of data period" can indicate when the system output signal with a synchronization signal contribution carrying bit b1 is transmitted from the LIDAR system. The "end of data period" can indicate the end of the data period at time t=tDPj,k+tDP. The graph also includes a location labeled "start of correlation cycle" and a location labeled "end of correlation cycle." The "start of correlation cycle" can indicate when the CCS sample (p1,1) enters the second delay cells 406. The "end of correlation cycle" can indicate when the CCS sample (pN,M) enters the second delay cells 406. In some instances, the duration of the correlation cycle is equal to the duration of the data period so the data from sequential data periods can be calculated in series.
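The correlate, power, and peak-find chain can be modeled compactly in software, as in the sketch below; the code length, rotation, delay, and noise level are assumed toy values, and np.correlate stands in for the tapped-delay-line correlator.

    import numpy as np

    rng = np.random.default_rng(0)
    code = 1 - 2 * rng.integers(0, 2, 127)     # toy bi-polar code, assumed
    received = np.zeros(400, dtype=complex)
    delay = 173                                 # true delay in samples, assumed
    received[delay:delay + 127] = code * np.exp(1j * 0.3)   # phase-rotated echo
    received += 0.2 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))

    a = np.correlate(received, code, mode="valid")   # alignment indicators aq
    power = a.real**2 + a.imag**2                    # Re^2 + Im^2 per indicator
    print(np.argmax(power))                          # recovers the delay: 173

The Re^2 + Im^2 step is what makes the result insensitive to the residual phase rotation carried by the echo, matching the discussion above.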
There is a system delay labeled ds between the "start of data period" and the "start of correlation cycle." The system delay (ds) can be the result of delays from one or more sources selected from the group consisting of delays in electronics such as a delay caused by the matched filter, delays from other sources, and/or delays induced by the system or system operator. The degree of shifting shown on the x-axis can be represented by the number of CCS sample (pn,k) shifts that occur after the shift where the CCS sample p1,1 enters the second delay cells 406. The time increases with increasing numbers of shifts and the time increase can be linear or substantially linear. As a result, the degree of shifting can also represent correlation time (t′) where the correlation time (t′) is equal to 0 at the "start of correlation cycle." When the duration of the correlation cycle is equal to the duration of the data period, the correlation time (t′) can be equal to tDP at the "end of correlation cycle." The power signal includes a peak labeled AC. The AC peak is a result of the convolved code signal that carries the CCS samples (pn,k) carrying a version of the binary code. Since the binary code has good autocorrelation properties and the convolved code signal carries a version of the binary code, the CCS samples (pn,k) in the convolved code signal have similar autocorrelation properties. A characteristic of good autocorrelation properties is that the alignment indicators peak when different versions of the code are aligned. Since the power signal values are derived from the alignment indicators provided by the convolved code signal, the power signal also shows a peak when there is alignment between the CCS samples (pn,k) carried in the convolved code signal and the bi-polar version of the binary code. Accordingly, the peak labeled AC in FIG. 4K corresponds to alignment between the bi-polar version of the binary code and the code carried in the convolved code signal. The power signal is received at the peak finder 358 of FIG. 4G. As is shown in FIG. 4K, the power signal can include one or more peaks that are a result of noise and/or other reflecting objects. The peak finder 358 is configured to identify one or more peaks in the power signal with a power level above a noise threshold. Each peak above the noise threshold results from the system output signal being reflected by an object located outside of the LIDAR system. When the ωd term is not removed from the results, the associated sinusoid increases the difficulty of finding these peaks. The elimination of this sinusoid from the results increases the accuracy of the peak identification. Suitable peak finders 358 include, but are not limited to, peak finding algorithms. The output of the peak finder 358 is received at the delay identifier 360 of FIG. 4G. The delay identifier 360 determines the value of the correlation time (t′) at the identified peak. When the duration of the correlation cycle is equal to the duration of the data period, the value of the correlation time (t′) when alignment occurs between the convolved code signal and the bi-polar version of the binary code represents or substantially represents the amount of delay between the code being transmitted from the LIDAR system and returning to the LIDAR system. Accordingly, the delay identifier 360 can output a return identification signal that indicates the delay time (τj,k). The delay time indicator can quantify the delay time but need not actually quantify time.
For instance, the delay time indicator can be other data that represents the delay time. As an example, the delay time indicator can indicate the number of bits that are transmitted before alignment occurs. As shown in FIG. 4G, the separator 236 includes a third filter 362 that receives the first data signal and a fourth filter 364 that receives the second data signal. The third filter 362 is selected to filter from the first data signal the synchronization signal contributions, the synchronization reference signal contributions, and the undesired higher frequency components discussed in the context of FIG. 4E. As a result, the third filter 362 passes the LIDAR signal contribution and the LIDAR reference signal contribution in a third filtered signal. The fourth filter 364 is selected to filter from the second data signal the synchronization signal contributions, the synchronization reference signal contributions, and the undesired higher frequency components discussed in the context of FIG. 4E. As a result, the fourth filter 364 passes the LIDAR signal contribution and the LIDAR reference signal contribution in a fourth filtered signal. As is evident from FIG. 4E, the LIDAR signal band is at DC, so upconversion and/or downconversion is not needed before the filtering of the first data signal and/or the second data signal. Suitable third filters 362 and/or fourth filters 364 include, but are not limited to, lowpass filters and filter pairs with matching responses. The third filtered signal and the fourth filtered signal together serve as a second complex filtered signal. The second complex filtered signal is received at the frequency identifier 250. The frequency identifier 250 includes a memory 370 configured to store the second complex filtered signal. Suitable memories include, but are not limited to, buffers. The frequency identifier 250 includes a window identifier 371 that receives the return identification signal that indicates the delay time (τj,k) from the return identifier 242. The window identifier 371 uses the delay time (τj,k) to identify the data window (labeled tDG in FIG. 4C). For instance, the window identifier 371 can set the data window (tDG) as extending from the delay time (τj,k) to the end of the data period or some smaller window within the time period extending from the delay time (τj,k) to the end of the data period. As an example, the window identifier 371 can set the data window (tDG) as extending from the time (tDPj,k+τj,k) to time (tDPj,k+tDP) or some smaller portion of the time period within the time from (tDPj,k+τj,k) to (tDPj,k+tDP). In some instances, a data window is selected to have a duration that exceeds the length of time between τM and the end of the data period while being smaller than the length of time from (tDPj,k+τj,k) to time (tDPj,k+tDP). An example of a smaller portion of the data window within the time (tDPj,k+τj,k) to time (tDPj,k+tDP) includes a data window extending from time (tDPj,k+τj,k+dly) to time (tDPj,k+tDP) where dly represents a programmed delay. In the above data window examples, the identified data window (tDG) is a function of the delay time (τj,k). Since the value of the delay time (τj,k) is a function of the distance between the LIDAR system and a reflecting object, the identified data window (tDG) is also a function of the distance between the LIDAR system and a reflecting object. Accordingly, the data window (tDG) is dynamic and changes during operation of the LIDAR system.
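A minimal sketch of this dynamic window selection follows; the function and its numeric arguments are assumed illustrations of the tDG bookkeeping rather than an implementation from this disclosure.

    def data_window(t_dp_start, t_dp, tau, dly=0.0):
        """Return the (open, close) times of the data window tDG."""
        open_t = t_dp_start + tau + dly     # window opens at the delay time
        close_t = t_dp_start + t_dp         # window closes with the data period
        if open_t >= close_t:
            raise ValueError("delay exceeds the data period")
        return open_t, close_t

    print(data_window(t_dp_start=0.0, t_dp=1.33e-6, tau=0.9e-6))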
The frequency identifier 250 includes a transform mechanism 372 that receives the identified data window from the window identifier 371. The transform mechanism 372 identifies the portion of the second complex filtered signal that is stored in the memory 370 and was generated from system return signals that returned to the LIDAR system during the identified data window. The transform mechanism 372 includes a mathematical operation component 374 configured to receive the identified portion of the second complex filtered signal. The mathematical operation component 374 is configured to perform a mathematical operation on the identified portion of the second complex filtered signal. Examples of suitable mathematical operations include, but are not limited to, mathematical transforms such as Fourier transforms. The mathematical transform can be a complex transform such as a complex Fast Fourier Transform (FFT). A complex Fast Fourier Transform (FFT) can provide an output that indicates magnitude as a function of frequency. As a result, a peak in the output of the complex transform can occur at and/or indicate the correct solution for the LIDAR beat frequency. The mathematical operation component 374 can execute the attributed functions using firmware, hardware, or software, or a combination thereof. The output of the mathematical operation component 374 is received at a LIDAR data component 376. The LIDAR data component 376 can perform a peak find on the output of the mathematical operation component 374 to identify the peak in the frequency of the output of the mathematical operation component 374. The LIDAR data component 376 treats the frequency at the identified peak as the LIDAR beat frequency. The LIDAR data component 376 can use the identified beat frequencies in combination with the frequency pattern of the system output signal to generate the LIDAR data. The electronics can combine the LIDAR beat frequencies (fLDP) from two or more different data periods to generate LIDAR data. For instance, the beat frequency determined from DP1 in FIG. 4C can be combined with the beat frequency determined from DP2 in FIG. 4C to determine the LIDAR data. As an example, the following equation applies during a data period where the electronics increase the frequency of the LIDAR signal contribution during the data period such as occurs in data period DP1 of FIG. 4C: fub=−fd+α*τ where fub is the LIDAR beat frequency determined from DP1 in this case, fd represents the Doppler shift (fd=2*ν*f0/c) where f0 represents the base frequency, c represents the speed of light, and ν is the radial velocity between the reflecting object and the LIDAR system where the direction from the reflecting object toward the LIDAR system is assumed to be the positive direction. The following equation applies during a data period where the electronics decrease the frequency of the LIDAR signal contribution such as occurs in data period DP2 of FIG. 4C: fdb=−fd−α*τ where fdb is the LIDAR beat frequency determined from DP2 in this case. In these two equations, fd and τ are unknowns. The electronics solve these two equations for the two unknowns. The radial velocity for the sample region can then be quantified from the Doppler shift (ν=c*fd/(2*f0)) and/or the separation distance for that sample region can be quantified from c*τ/2. In some instances, more than one object is present in a sample region. In these cases, more than one LIDAR beat frequency may be present in a data period. Each of the LIDAR beat frequencies can be associated with a different object.
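Solving the two beat-frequency equations is simple algebra, as the sketch below shows; the chirp rate, base frequency, and beat values are assumed examples chosen so the recovered delay and velocity are physically plausible.

    c = 3.0e8
    alpha = 1.0e14        # chirp rate (Hz/s), assumed
    f0 = 1.9e14           # base frequency (Hz), assumed (~1550 nm light)

    f_ub = 87.3e6         # beat from the up-chirp period DP1, assumed
    f_db = -112.7e6       # beat from the down-chirp period DP2, assumed

    # f_ub = -f_d + alpha*tau and f_db = -f_d - alpha*tau give:
    f_d = -(f_ub + f_db) / 2.0
    tau = (f_ub - f_db) / (2.0 * alpha)
    v = c * f_d / (2.0 * f0)       # radial velocity, ~10 m/s here
    dist = c * tau / 2.0           # separation distance, 150 m here
    print(f_d, tau, v, dist)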
The LIDAR beat frequencies that result from the same object in different data periods of the same cycle can be considered corresponding frequency pairs. LIDAR data can be generated for each corresponding frequency pair output by the transform. As a result, separate LIDAR data can be generated for each of the objects in a sample region. The data period labeled DP3 in FIG. 4C is optional and allows the LIDAR beat frequencies belonging to corresponding frequency pairs to be identified. For instance, during DP1 for cycle 2 and also during DP2 for cycle 2, more than one LIDAR beat frequency can be present. In these circumstances, it may not be clear which LIDAR beat frequency from DP2 corresponds to which LIDAR beat frequency from DP1. As a result, it may be unclear which LIDAR beat frequencies need to be used together to generate the LIDAR data for an object in the sample region. As a result, there can be a need to identify corresponding frequencies. The identification of corresponding frequencies can be performed such that the corresponding frequencies are frequencies from the same reflecting object within a sample region. The data period labeled DP3 can be used to find the corresponding frequencies. LIDAR data can be generated for each pair of corresponding frequencies and is considered and/or processed as the LIDAR data for the different reflecting objects in the sample region. An example of the identification of corresponding frequencies uses a LIDAR system where the cycles include three data periods (DP1, DP2, and DP3) as shown in FIG. 4C. When there are two objects in a sample region illuminated by the system output signal, two different LIDAR beat frequencies can be determined for fub: fu1 and fu2 during DP1 and another two different LIDAR beat frequencies for fdb: fd1 and fd2 during DP2. In this instance, the possible frequency pairings are: (fd1, fu1); (fd1, fu2); (fd2, fu1); and (fd2, fu2). A value of fd and τ can be calculated for each of the possible frequency pairings. Each pair of values for fd and τ can be substituted into f3=−fd+α3*τ to generate a theoretical f3 for each of the possible frequency pairings. The value of α3 is different from the value of α used in DP1 and DP2. In FIG. 4C, the value of α3 is zero. In this case, the mathematical operation component 374 also outputs two values for f3 that are each associated with one of the objects in the sample region. The frequency pairing with a theoretical f3 value closest to each of the actual f3 values is considered a corresponding pair. LIDAR data can be generated for each of the corresponding pairs as described above and is considered and/or processed as the LIDAR data for a different one of the reflecting objects in the sample region. Each set of corresponding frequencies can be used in the above equations to generate LIDAR data. The generated LIDAR data will be for one of the objects in the sample region. As a result, multiple different LIDAR data values can be generated for a sample region where each of the different LIDAR data values corresponds to a different one of the objects in the sample region. As noted above, the power signal versus time graph shown in FIG. 4K may have multiple peaks that are above the noise threshold as a result of the system output signal being reflected by multiple objects. As a result, the delay identifier 360 may output multiple roundtrip times that are each associated with a different one of the objects. The LIDAR system can be configured to generate LIDAR data for each of the different objects.
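The pairing test can be expressed as a small search, sketched below with two objects; all beat values are assumed, α3=0 as in FIG. 4C, and the pairing whose theoretical f3=−fd lies closest to a measured f3 is kept.

    from itertools import product

    f_up = [87.3e6, 95.0e6]          # DP1 beats for two objects, assumed
    f_down = [-112.7e6, -105.0e6]    # DP2 beats, assumed
    f3_measured = [-12.7e6, -5.0e6]  # DP3 beats (alpha3 = 0), assumed

    pairings = []
    for fu, fdb in product(f_up, f_down):
        f_d = -(fu + fdb) / 2.0          # candidate Doppler shift
        f3_theory = -f_d                 # with alpha3 = 0, f3 = -f_d
        err = min(abs(f3_theory - m) for m in f3_measured)
        pairings.append((err, fu, fdb))

    corresponding = sorted(pairings)[:2]   # pairings closest to a measured f3
    print(corresponding)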
For instance, the electronics can include multiple distance finders 262 that each determines the distance between the object and the LIDAR system for one of the objects and/or multiple velocity calculators 264 that each determines a radial velocity between the object and the LIDAR system for one of the objects. Additionally or alternately, the electronics can include a distance finder 262 that serially determines the distance between the LIDAR system and each of two or more of the objects and/or a velocity calculator 264 that serially determines the radial velocity between the LIDAR system and each of two or more of the objects. The data period labeled DP2 in FIG. 4C is also optional. For instance, the LIDAR data component 376 can determine the delay time (τj,k) from the return identification signal and determine the separation distance from c*τj,k/2. The radial velocity can then be determined from DP1. For instance, the following equation applies during a data period where the electronics increase the frequency of the LIDAR signal contribution during the data period such as occurs in data period DP1 of FIG. 4C: −fub+α*τj,k=fd where fub is the LIDAR beat frequency determined from DP1 in this case, fd represents the Doppler shift (fd=2*ν*f0/c) where f0 represents the base frequency, c represents the speed of light, and ν is the radial velocity between the reflecting object and the LIDAR system where the direction from the reflecting object toward the LIDAR system is assumed to be the positive direction. This equation can be solved for fd and the radial velocity for the sample region can then be quantified from (ν=c*fd/(2*f0)). Since the data period labeled DP2 and the data period labeled DP3 are optional, each cycle can include a single data period. As is evident from the above discussions, in some instances, a single electrical line illustrated above carries a complex signal. Suitable platforms for the LIDAR chips include, but are not limited to, silica, indium phosphide, and silicon-on-insulator wafers. FIG. 5 is a cross-section of a portion of a LIDAR chip constructed from a silicon-on-insulator wafer. A silicon-on-insulator (SOI) wafer includes a buried layer 310 between a substrate 312 and a light-transmitting medium 314. In a silicon-on-insulator wafer, the buried layer 310 is silica while the substrate 312 and the light-transmitting medium 314 are silicon. The substrate 312 of an optical platform such as an SOI wafer can serve as the base for the entire LIDAR chip. For instance, the optical components shown on the above LIDAR chips can be positioned on or over the top and/or lateral sides of the substrate 312. FIG. 5 also shows a waveguide construction that is suitable for use in LIDAR chips constructed from silicon-on-insulator wafers. A ridge 316 of the light-transmitting medium extends away from slab regions 318 of the light-transmitting medium. The light signals are constrained between the top of the ridge 316 and the buried layer 310. The dimensions of the ridge waveguide are labeled in FIG. 5. For instance, the ridge has a width labeled w and a height labeled h. A thickness of the slab regions is labeled T. For LIDAR applications, these dimensions can be more important than other dimensions because of the need to use higher levels of optical power than are used in other applications.
The ridge width (labeled w) is greater than 1 μm and less than 4 μm, the ridge height (labeled h) is greater than 1 μm and less than 4 μm, and the slab region thickness is greater than 0.1 μm and less than 3 μm. These dimensions can apply to straight or substantially straight portions of the waveguide, curved portions of the waveguide, and tapered portions of the waveguide(s). Accordingly, these portions of the waveguide will be single mode. However, in some instances, these dimensions apply to straight or substantially straight portions of a waveguide. Additionally or alternately, curved portions of a waveguide can have a reduced slab thickness in order to reduce optical loss in the curved portions of the waveguide. For instance, a curved portion of a waveguide can have a ridge that extends away from a slab region with a thickness greater than or equal to 0.0 μm and less than 0.5 μm. While the above dimensions will generally provide the straight or substantially straight portions of a waveguide with a single-mode construction, they can result in tapered section(s) and/or curved section(s) that are multimode. Coupling between the multi-mode geometry and the single-mode geometry can be achieved using tapers that do not substantially excite the higher order modes. Accordingly, the waveguides can be constructed such that the signals carried in the waveguides are carried in a single mode even when carried in waveguide sections having multi-mode dimensions. The waveguide construction disclosed in the context of FIG. 5 is suitable for all or a portion of the waveguides on the above LIDAR chips. Light sensors that are interfaced with waveguides on a LIDAR chip can be a component that is separate from the chip and then attached to the chip. For instance, the light sensor can be a photodiode or an avalanche photodiode. Examples of suitable light sensor components include, but are not limited to, InGaAs PIN photodiodes manufactured by Hamamatsu located in Hamamatsu City, Japan, or InGaAs APDs (Avalanche Photo Diodes) manufactured by Hamamatsu located in Hamamatsu City, Japan. These light sensors can be centrally located on the LIDAR chip. Alternately, all or a portion of the waveguides that terminate at a light sensor can terminate at a facet located at an edge of the chip, and the light sensor can be attached to the edge of the chip over the facet such that the light sensor receives light that passes through the facet. The use of light sensors that are a separate component from the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor. As an alternative to a light sensor that is a separate component, all or a portion of the light sensors can be integrated with the chip. For instance, examples of light sensors that are interfaced with ridge waveguides on a chip constructed from a silicon-on-insulator wafer can be found in Optics Express Vol. 15, No. 21, 13965-13971 (2007); U.S. Pat. No. 8,093,080, issued on Jan. 10, 2012; U.S. Pat. No. 8,242,432, issued Aug. 14, 2012; and U.S. Pat. No. 6,108,472, issued on Aug. 22, 2000; each of which is incorporated herein in its entirety. The use of light sensors that are integrated with the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor.
A light source selected from a group consisting of a reference light source 3, a synchronization light source 5, and a light source 7 that is interfaced with a ridge waveguide can be a gain element that is a component separate from the LIDAR chip and then attached to the LIDAR chip. For instance, the light source can be a gain element or laser chip that is attached to the LIDAR chip using a flip-chip arrangement. Use of flip-chip arrangements is suitable when a light source is to be interfaced with a ridge waveguide on a chip constructed from a silicon-on-insulator wafer. Examples of suitable interfaces between flip-chip gain elements and ridge waveguides on chips constructed from silicon-on-insulator wafers can be found in U.S. Pat. No. 9,705,278, issued on Jul. 11, 2017, and in U.S. Pat. No. 5,991,484, issued on Nov. 23, 1999; each of which is incorporated herein in its entirety. These constructions are suitable for use as the above light sources. When a light source is a gain element, the electronics can change the frequency of the outgoing LIDAR signal by changing the level of electrical current applied to the gain element. Components such as a phase modulator 11 and an intensity modulator 12 can each be a component that is separate from the LIDAR chip and then attached to the chip. For instance, the modulator can be included on a modulator chip that is attached to the LIDAR chip in a flip-chip arrangement. Suitable electronics can include, but are not limited to, a controller that includes or consists of analog electrical circuits, digital electrical circuits, processors, microprocessors, digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), computers, microcomputers, or combinations suitable for performing the operation, monitoring, and control functions described above. In some instances, the controller has access to a memory that includes instructions to be executed by the controller during performance of the operation, control, and monitoring functions. Although the electronics are illustrated as a single component in a single location, the electronics can include multiple different components that are independent of one another and/or placed in different locations. Additionally, as noted above, all or a portion of the disclosed electronics can be included on the chip, including electronics that are integrated with the chip. Although the synchronization signal is disclosed in the context of code division multiplexing with a binary code, the LIDAR system can use multi-digit codes with more than two digits. For instance, the LIDAR system can use quadrature phase shift keying to encode the system output signal. The LIDAR system is disclosed as having data periods that each has the same duration (tDP); however, different data periods in the same cycle can have different durations. Other embodiments, combinations, and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.
DETAILED DESCRIPTION Embodiments herein describe a robotic system that uses range sensors to identify a vector map of an environment. Rather than generating an occupancy grid, which has the disadvantages described above, the vector map includes lines that outline the shape of objects in the environment (e.g., shelves or pallets disposed on the floor of a warehouse). The lines defining the vector map can be compactly stored in memory and have the same accuracy as the range sensors used to capture the range data. The robot can then use the vector map to safely navigate the environment. In one embodiment, the robotic system generates 2D or 3D point clouds using the range data captured by range sensors mounted on the robot. Using clustering, the system identifies one or more line segments representing the boundary or outline of the objects in the environment. The robotic system can repeat this process at different locations as it moves in the environment, e.g., every six inches or every several feet. However, the orientation and translation of the line segments at each location where the robot collects the sensor data may be off due to errors in the internal navigation system of the robot. Put differently, at a first location the robot identifies a first line segment defining a boundary of a shelf, but at a second location, the robot identifies a second line segment which corresponds to the same shelf but has a translation or orientation different from the first line segment. That is, the first and second line segments do not clearly align even though they were formed from range data corresponding to the same object in the environment. To account for the error in the internal navigation system, in one embodiment the robotic system performs iterative closest vector (ICV) analysis which iterates between the line segments generated by the robot at two different locations to determine the translational and orientation differences between the segments. The line segments with the smallest translational and orientation differences are most likely lines that correspond to the same object. The robotic system can then use an estimated covariance of the matched line segments to merge the line segments into a line (or vector) that can be stored in the vector map. In this manner, lines or vectors defining the shapes of objects in the environment can be used to form a vector map of the environment. FIG. 1 illustrates a vector map 100 generated by a robot 105, according to various embodiments. As shown, the vector map 100 includes object boundaries 110 that are the boundaries or shapes of objects within a real-world environment traversed by the robot 105. For example, the vector map 100 may be a map of a warehouse containing the robot 105 where the object boundaries 110 represent locations of walls, shelves, pallets, and other objects disposed on the floor of the warehouse. In other examples, the vector map 100 could map other types of indoor environments (e.g., homes, offices, stores, shipping containers, or parking garages) as well as outdoor environments (e.g., sporting venues, parks, or loading docks). The vector map 100 includes gaps between the object boundaries 110 which may be outside the range of the sensor 115 used by the robot when traversing the path 150. That is, the robot 105 may not be close enough to these objects to detect their boundaries when traversing the path 150.
However, in some embodiments, the robot105may include a navigation system that identifies incomplete portions of the map100and instructs the robot to return to these areas to map unknown regions. In one embodiment, the sensor115includes a depth sensor, a time-of-flight camera, or Lidar. In addition to the range sensor115for detecting objects, the robot105also includes a movement system120which propels the robot105in the environment. The movement system120can include wheels, tracks, legs, and the like for moving the robot105along the path150. An internal navigation system in the movement system120can output odometry data135which tracks the orientation and movement of the robot105in the environment. For example, the internal navigation system may track the rotation of a wheel to determine how far the robot has moved, or monitor the output of an inertial measurement unit (IMU) and/or accelerometers and gyroscopes to determine an orientation or acceleration of the robot105along the path150. The robot105has a memory125that stores the range data130generated by the range sensor115and the odometry data135generated by the movement system120. The memory125also includes a line generator140and a line matcher145which may be applications, software modules, firmware, hardware, or combinations thereof which use the range data130and the odometry data135to detect the object boundaries110for the vector map100. In one embodiment, the line generator140uses the range data130to generate line segments at various locations along the path150. That is, every few inches or feet, the robot105activates the range sensor115to generate updated range data130which the line generator140converts into one or more line segments. Because the line segments at one location may not align with the line segments at a previous location due to errors in the odometry data135, as described in more detail below, the line matcher145determines which line segments generated at the current location match line segments generated at the previous location where range data130was gathered. Stated differently, the line matcher145identifies line segments identified at a first location that correspond to the same objects as line segments identified at a second location, thereby indicating the line segments should be merged because they represent the same boundary of a particular object. By merging line segments, the line matcher145generates lines which represent the object boundaries110illustrated in the vector map100. In one embodiment, the lines (or line segments) defining the object boundaries110can be stored in the memory125using significantly less memory than an occupancy grid which divides the environment into a plurality of cells. Further, the lines forming the object boundaries110can be stored as scatter matrices which store the center of mass of the lines, the mass (or points of the line), and the orientation of the line in the environment, which further reduces the amount of memory125used to store the vector map100. FIG.2is a flowchart of a method200for generating a vector map using range sensors on a moving robot, according to various embodiments. The method200begins at block205where the line generator on the robot performs clustering to convert range data in a depth cloud into a first set of line segments at a first location. As described in more detail inFIG.3, the line generator forms a 2D or 3D depth cloud from the range data captured by a sensor on the robot (e.g., a depth sensor, a time-of-flight camera, or Lidar). 
From the depth cloud, the line generator forms one or more line segments which indicate the boundaries of objects in the environment around the robot. In one embodiment, the line generator uses an ICV algorithm to generate the line segments. Using ICV rather than an iterative closest point (ICP) approach to identify the boundary of an object may yield computational efficiency, although the line segments first have to be identified from the points. At block210, the line generator performs clustering to convert range data in a depth cloud into a second set of line segments at a second location. That is, the line generator on the robot may repeat the process used to generate the first set of line segments at block205at a different location in order to generate the second set of line segments. In one embodiment, the robot may stop at the first location, generate the first set of line segments using range data captured at the first location, move to the second location, and then stop to generate the second set of line segments using updated range data. However, in other embodiments, the robot may capture the range data while moving (e.g., without stopping). Ideally, the segments in the first set would align with corresponding segments in the second set, thereby indicating line segments that pertain to the same object. That is, the line segments captured at the two locations (assuming the locations are sufficiently close together) may at least partially overlap or at least have a similar orientation. However, the locations and orientations of the line segments at the two locations may be determined by an internal navigation system such as an IMU or wheel rotation tracker in the movement system120illustrated inFIG.1. Due to errors in these components, the line segments identified at the two locations may not overlap or have the same orientation. Thus, it may not be apparent which line segments in the first and second sets correspond to the same object in the environment. At block215, the line matcher matches the first set of line segments to corresponding segments in the second set of line segments to form lines in the vector map of the environment. That is, the line matcher can overcome the error in the internal navigation system in the robot to identify which line segments in the first and second sets correspond to the same physical object in the environment. The details for matching the line segments are described later inFIG.8. One advantage of the method200is that a vector map generated from the matched line segments is inherently resolved as the robot moves to places in the environment it has mapped previously. One difficulty in environment mapping using simultaneous localization and mapping (SLAM) is identifying and resolving locations in the environment that a robot has previously mapped. This is important to identify whether there is a new object in the environment or whether a seemingly new object is a previously mapped object which, due to inaccuracies in the mapping process, now appears to be in a different location or orientation. However, using the SLAM techniques described herein, the mapping system can automatically detect and resolve previously detected objects with current measured sensor data as well as identify new objects. This improves the ability of the mapping system to identify old and new objects in the environment. 
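As a concrete illustration of blocks205and210, the following is a minimal sketch, in Python, of converting a scan-ordered 2D depth cloud into line segments. The helper names, the gap threshold, and the PCA-based fit are illustrative assumptions and are not details taken from this disclosure:

```python
# Hypothetical sketch of blocks 205/210: split a scan-ordered 2D depth
# cloud into clusters and fit a line segment to each cluster.
import numpy as np

def cluster_points(points, gap_threshold=0.3):
    """Start a new cluster whenever the gap between consecutive
    scan-ordered points exceeds gap_threshold (meters)."""
    clusters, current = [], [points[0]]
    for p in points[1:]:
        if np.linalg.norm(p - current[-1]) > gap_threshold:
            clusters.append(np.array(current))
            current = []
        current.append(p)
    clusters.append(np.array(current))
    return clusters

def fit_segment(cluster):
    """Least-squares fit via the principal direction of the cluster;
    returns the two segment endpoints."""
    centroid = cluster.mean(axis=0)
    _, _, vt = np.linalg.svd(cluster - centroid)
    direction = vt[0]
    t = (cluster - centroid) @ direction
    return centroid + t.min() * direction, centroid + t.max() * direction

# Example: two walls meeting near a corner, observed as six scan points.
scan = np.array([[0.0, 2.0], [0.5, 2.0], [1.0, 2.0],
                 [3.0, 2.0], [3.0, 1.5], [3.0, 1.0]])
segments = [fit_segment(c) for c in cluster_points(scan)]
```

Splitting on gaps between consecutive scan points is only one simple clustering choice among many; the method described next is not limited to it.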
FIG.3is a flowchart of a method300for generating line segments from point clouds representing objects in the environment, according to various embodiments. In one embodiment, the method300illustrates a specific technique for identifying the first and second sets of line segments at blocks205and210of the method200inFIG.2. The method300begins at block305where the line generator generates a first depth cloud containing a plurality of points when the robot is at the first location. In one embodiment, the points are generated from the range data captured by distance or range sensors on the robot. Deriving points from the range data and generating a depth cloud is shown inFIG.4. FIG.4illustrates depth cloud points405representing objects in the environment, according to various embodiments. In one embodiment, the points405are part of a 2D depth cloud that represents objects disposed on a surface in an environment400in which the robot105travels. The points405can be generated using a depth sensor, time-of-flight sensor, or Lidar. For example, the points405here could represent a corner of a wall or two shelves that intersect. Note that inFIG.4there are no points405at the corner where the sides of the object (or objects) intersect. This may be because the corner of the object is beyond the range of the sensor on the robot105. Moreover, although points405in a 2D depth cloud or 2D plane are shown, in other embodiments, the robot may identify points in a 3D depth cloud (where points could be at various distances above the surface of the environment400on which the robot105traverses). Thus, rather than identifying the footprint of the objects at the floor of the environment400as shown inFIG.4, a 3D depth cloud could be used to identify a shape of the object as it extends in a vertical direction away from the floor. Returning to method300, at block310, the line generator identifies the first set of line segments from the first depth cloud. In one embodiment, the line generator uses clustering to identify points that should be grouped together to form a line. For example, the line generator can use clustering to identify the points in a group that define a line in a common direction. The embodiments herein are not limited to any specific clustering algorithms to evaluate the depth clouds in order to identify line segments. FIG.5illustrates using clustering to identify line segments from the depth cloud points inFIG.4, according to various embodiments. InFIG.5, the line generator identifies the line segment505A and line segment505B using the points405illustrated inFIG.4. These line segments505may be portions of boundaries of objects in the environment400such as shelving, walls, pallets, furniture, and the like. While the points405and the line segments505inFIGS.4and5are straight, the embodiments herein are not limited to detecting objects with straight surfaces. Although generating line segments may work best for mapping objects with straight lines, the embodiments herein can still identify boundaries that have curved sides. In that case, the line generator may subdivide the curved surface into a plurality of smaller, straight line segments which approximate the curved side. Returning to method300, at block315the line generator generates a second depth cloud containing a plurality of points when the robot is at the second location. That is, after gathering range data at a first location for performing blocks305and310, the robot can move to the second location and gather range data for performing block315. 
After generating the second point cloud using the updated range data, at block320, the line generator identifies the second set of line segments using the second point cloud. In one embodiment, the line generator can use a similar or same clustering technique to identify both the first and second sets of line segments. In one embodiment, the translations (e.g., the locations of the line segments in the environment) and the orientations of the line segments in the first and second sets depend on the odometry data identified by the movement system. For example, when moving from the first location to the second location, an internal navigation system may track the wheel rotation to identify the distance moved by the robot, or use acceleration data provided by an IMU to determine the facing direction or orientation of the robot. Based on this odometry data, the line generator can determine the translation (i.e., the location of the line segment) and the orientation of the line segments on the floor of the environment. However, due to wheel slips or imperfections in the data generated by the IMU, the translation and orientation of the line segments may be off from what is expected. That is, line segments in the first and second sets that correspond to the same object may have different locations and orientations in the environment. Thus, it can be unclear whether the line segments in the first and second sets correspond to the same object or different objects. This error is illustrated inFIG.6. FIG.6illustrates identifying line segments representing objects when the robot is at two different locations, according to various embodiments. The line segments605A and605B are line segments identified when the robot is at Position A. In contrast, the line segments605C and605D are identified when the robot is at Position B after moving along the path610from Position A. For example, the robot may gather new range data each time the movement system indicates the robot has moved a predefined distance (e.g., every six inches or two feet) or at predefined times (e.g., every four seconds). In this example, the line segments605A and605C correspond to the same side of a first object while the line segments605B and605D correspond to the same side of a second object. As such, ideally the line segments605A and605C would have the same orientation in the environment and could have overlapping portions (depending on the distance between Position A and Position B). Similarly, the line segments605B and605D should have the same orientation in the environment and could have overlapping portions. However, imperfections in the location and orientation of the robot in the odometry data result in the line segments605C and605D having different orientations than the orientations of the line segments605A and605B. As discussed below inFIG.8, the line matcher can perform a matching and merging process to compensate for this error and adjust the locations and orientations of the line segments605so the corresponding segments at least have the same orientation and may have overlapping portions. Returning to method300, at block325the line generator estimates segment uncertainty by identifying a covariance at the endpoints of the segments in the first and second sets of line segments. In one embodiment, the points in the cluster used to generate the line segments at blocks305and315have uncertainty resulting from inaccuracies in the distance measurements generated by the range sensor. 
For example, a depth sensor or Lidar sensor may generate distance measurements that may be off by +/−1-2 centimeters. As such, because the clustering algorithms use these points to generate the line segments, the uncertainty in the points can lead to inaccuracies in the location and orientation of the line segments. FIG.7illustrates estimating the uncertainty of the line segments, according to various embodiments. Specifically,FIG.7illustrates forming line segments using the points705derived from data generated by a range sensor. For example, using a clustering algorithm, the line generator may have determined that the points705correspond to the same object, and thus, should be grouped as the same line. As mentioned above, the location of the points705may be uncertain due to inaccuracies in the range sensor. This uncertainty is modeled by ellipses710surrounding the points705. That is, the ellipses710indicate regions centered at the points705within which the actual location of the object should be. Thus, the actual point or location of the object can be located within the ellipses710. In one embodiment, the line generator identifies outliers from the set of points which may be due to obvious error (such as reflections or backscattering) that provide a false location of the object. The line generator can use any technique to identify an outlier (or multiple outliers) from the set of points705. These outlier points can be discarded. For simplicity,FIG.7illustrates only inlier points705. In one embodiment, the line generator derives different line segments by selecting different locations of the points within the uncertainty ellipses710.FIG.7illustrates performing iterations 1 through k using the uncertainty ellipses710to determine different line segments. For example, the line segment715A is derived from the actual or original points705: the line generator can use a fitting algorithm to identify the line segment715A which best fits the locations of the points705. The line segment715B illustrates selecting different points720within the uncertainty ellipses710. For example, the points720may be randomly selected from within the ellipses710. Using these points720, the line generator can again identify the line segment715B which best fits the points720. The original line segment715A, points705, and ellipses710are shown as dotted lines. The line segment715C illustrates selecting different points725during a kth iteration of the line fitting algorithm where again the original line segment715A, points705, and ellipses710are shown as dotted lines. In this example, the line generator again selects different points730within the ellipses and identifies the line segment715C which best fits these points730. Because the points at each iteration may be different, the resulting location and orientation of the line segments715may differ each time the fitting algorithm is performed. In this example, the line segment715D (e.g., a representative line segment) has the same location and orientation as the line segment715A derived from the original inlier points705. However, in another example, the location and orientation of the line segment715D is derived from the line segments identified during iterations 1 through k. For example, the location and orientation of the line segment715D may be averages of the locations and orientations of the line segments identified during iterations 1 through k. 
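The resampling procedure illustrated byFIG.7can be written as a small Monte Carlo routine. The sketch below is an illustrative assumption rather than the disclosed implementation: it models each uncertainty ellipse as isotropic Gaussian noise, fixes k iterations, and normalizes the sign of the fitted direction so that the two endpoints are tracked consistently across iterations:

```python
# Hypothetical sketch of the FIG. 7 procedure: resample inlier points
# within their uncertainty regions, refit a line per iteration, and
# estimate the covariance of the resulting segment endpoints.
import numpy as np

def endpoint_covariance(points, sigma=0.01, k=100):
    """points: (n, 2) inlier locations; sigma: per-axis standard
    deviation (meters) modeling the sensor uncertainty. Returns the mean
    location and 2x2 covariance for each of the two endpoints."""
    rng = np.random.default_rng(0)
    first_ends, last_ends = [], []
    for _ in range(k):
        sampled = points + rng.normal(0.0, sigma, points.shape)
        centroid = sampled.mean(axis=0)
        _, _, vt = np.linalg.svd(sampled - centroid)
        direction = vt[0]
        # Fix the sign so "first" and "last" endpoints stay consistent.
        if direction[0] < 0 or (direction[0] == 0 and direction[1] < 0):
            direction = -direction
        t = (sampled - centroid) @ direction
        first_ends.append(centroid + t.min() * direction)
        last_ends.append(centroid + t.max() * direction)
    first_ends, last_ends = np.array(first_ends), np.array(last_ends)
    return (first_ends.mean(axis=0), np.cov(first_ends.T),
            last_ends.mean(axis=0), np.cov(last_ends.T))
```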
In any case, the line generator uses the locations and orientations of the line segments derived from iterations 1 through k to identify the estimated covariance735of the endpoints740of the line segment715D. In one embodiment, the estimated covariance735is identified by tracking the locations of the endpoints of the line segments715derived from iterations 1 through k. In this example, the covariance735is an ellipse which indicates the possible endpoints of the line segment715D. Thus, rather than tracking the uncertainty ellipses710corresponding to each point705, the uncertainty of the line segment715D can be represented by the covariance735at the endpoints740. In one embodiment, each line segment in the first and second sets derived at blocks305and315is represented by the covariance735at the endpoints740A and740B. The covariance735can represent the various possible locations and orientations of the line segment715D which then can be used to match line segments from the first and second sets as described below. FIG.8is a flowchart of a method800for merging line segments identified when the robot is at two different locations, according to various embodiments. At block805, the line matcher identifies the locations and orientations of the first and second sets of line segments using odometry data. As described above, the robot may use an internal navigation system (e.g., the IMU or wheel turns) to determine, at least in part, the location and orientation of the first and second sets of line segments. As shown inFIG.6, although a line segment in the first set and a line segment in the second set may correspond to the same object, they may have locations and orientations that make it unclear whether or not they do represent the boundary of the same object. At block810, the line matcher corrects the odometry data. To do so, at block815, the line matcher uses an ICV technique or algorithm to calculate translation costs between each line segment in the first set and each line segment in the second set. Put differently, the line matcher determines a location difference between the line segments in the first and second sets. This location or translation difference may be computed based on an average location of the segments or by some other means. In one embodiment, the smaller the distance, the more likely it is that the two segments being compared in the two sets are the same edge of an object. At block820, the line matcher uses the ICV technique to calculate rotation costs between each line segment in the first set and each line segment in the second set. That is, the line matcher determines how much the orientation of a line segment in the first set differs from the orientation of the line segments in the second set, or how much one of the segments should be rotated in order to have the same orientation as the other segment. In one embodiment, the smaller the difference in orientation, the more likely it is that two segments in the two sets are the same edge of an object. At block825, the line matcher identifies segments in the first set that match segments in the second set. That is, the line matcher uses the translation and orientation costs to determine whether two line segments in the first and second sets match, and thus, correspond to the same object edge. 
Equation 1 illustrates an example ICV algorithm that can be used to identify matching line segments:

$$\arg\min \sum_{i=1}^{M} K_T\left(\hat{n}_{w_i}\left(A v_i^{cm} - w_i^{1}\right)\right) + K_R\left(A\vec{v}\cdot\hat{n}_{w_i}\right) \tag{1}$$

In Equation 1, the value $\hat{n}_{w_i}(A v_i^{cm} - w_i^{1})$ represents the translation cost between two line segments while the value $A\vec{v}\cdot\hat{n}_{w_i}$ represents the rotation cost between those segments. The values $K_T$ and $K_R$ are constant values associated with calculating the translation and rotation costs. The line matcher may determine that two lines match if they have the lowest translation and rotation costs. That is, the line matcher may use Equation 1 to generate an overall cost which compares a first line segment in the first set to all the line segments in the second set. The line segment in the second set that results in the smallest overall cost is deemed the match to the first line segment. Equation 1, however, is just one example of an ICV algorithm for identifying whether two line segments (which were generated using data captured when the robot was at two different locations) correspond to the same object. FIG.9illustrates translation and orientation errors between line segments when the robot is at two different locations, according to various embodiments. Specifically,FIG.9includes line segments905generated when the robot105is at Location A in the environment and line segments910generated when the robot105is at Location B. Because of the error and inaccuracies corresponding to the internal navigation system of the robot105, the line segments905A and910A, which correspond to the same edge or surface of a physical object, and the line segments905B and910B, which likewise correspond to the same edge or surface of a physical object, are misaligned. Nonetheless, using the portion of the method800described above, the line matcher can determine that the line segment905A should be merged with the line segment910A and the line segment905B should be merged with the line segment910B. The misalignment between the line segments905A and910A is illustrated by error915. The error915can include a misalignment in the translation or locations of the line segments905A and910A and the misalignment in the orientations of these line segments. The misalignment between the line segments905B and910B is illustrated by error920and can include a misalignment in the translation or locations of the line segments905B and910B and the misalignment in the orientations of these line segments. Returning to the method800, at block830the line matcher merges the matching line segments in the first and second sets. That is, the line matcher can reorient and/or move the line segments so that the matched line segments now have the same orientation. Further, depending on how far the robot moved when generating the first and second sets, a portion of the matching line segments may overlap. FIG.10illustrates merging the line segments identified when the robot is at two different locations, according to various embodiments. As shown, the line matcher corrects the errors915and920inFIG.9by merging the line segments905A and910A and merging the line segments905B and910B. AlthoughFIG.10illustrates that the merged line segments at least partially overlap, this is not a requirement. Depending on how much the robot105moves between Location B and Location A, the merged line segments might not overlap but nonetheless correspond to the same edge or surface. 
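Returning to the matching step, one hedged reading of Equation 1 is sketched below, taking the transform A as identity, folding the costs into absolute values, and matching greedily. The function names and these simplifications are illustrative assumptions, not the exact disclosed algorithm:

```python
# Hypothetical sketch of ICV matching in the spirit of Equation 1:
# project the displacement between segments onto one segment's normal
# (translation cost) and measure how far the other segment's direction
# is from parallel (rotation cost), then pick the lowest-cost pairing.
import numpy as np

def icv_cost(seg_a, seg_b, K_T=1.0, K_R=1.0):
    """seg_a, seg_b: ((x1, y1), (x2, y2)) endpoints in a common frame."""
    a0, a1 = np.asarray(seg_a[0]), np.asarray(seg_a[1])
    b0, b1 = np.asarray(seg_b[0]), np.asarray(seg_b[1])
    d = b1 - b0
    n_hat = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # normal of seg_b
    center_a = (a0 + a1) / 2.0
    translation_cost = abs(n_hat @ (center_a - b0))
    v_hat = (a1 - a0) / np.linalg.norm(a1 - a0)
    rotation_cost = abs(v_hat @ n_hat)  # zero when segments are parallel
    return K_T * translation_cost + K_R * rotation_cost

def match_segments(first_set, second_set):
    """Pair each first-set segment with its lowest-cost second-set segment."""
    return [(seg, min(second_set, key=lambda s: icv_cost(seg, s)))
            for seg in first_set]
```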
In one embodiment, the line matcher uses the covariance at the endpoints of the line segments when merging them to form a single line segment or vector.FIG.11illustrates merging two line segments using their covariance, according to various embodiments. As shown, the endpoints1110of the line segments1105each include an estimated covariance1115which may be derived using the embodiments described above. Before merging the line segments1105, the line matcher may first determine whether the covariance1115of one segment overlaps with the location of the other line segment. InFIG.11, the covariance1115for the endpoint1110A of the line segment1105A overlaps the line segment1105B. Similarly, the covariance1115for the endpoint1110D for the line segment1105B overlaps the line segment1105A. This overlapping can indicate that the location and orientation of the line segments1105A and1105B are within the error range of the sensor used to generate the range data. Put differently, the line segments1105likely correspond to the same object, but due to error in the range data and the odometry data, they may have different locations or orientations. In one embodiment, the covariance1115of the endpoints1110may be used along with the translation and rotation costs to match and merge the line segments. For example, after identifying the segments in the first and second sets with the smallest translation and rotation costs, the line matcher may merge them into a single line segment only if the covariance1115of at least one endpoint1110of one line segment overlaps the other line segment. However, in another embodiment, the line matcher may match the line segments in the first and second sets solely using the covariance1115of the endpoints1110(e.g., without calculating the translation and orientation costs). At block835, the line matcher stores scatter matrices representing the merged segments. As discussed above, an occupancy grid requires a large amount of data, especially as the size of the environment increases and the size of each block in the grid decreases. However, storing representations of merged line segments as scatter matrices can result in a large reduction in memory utilized when mapping the same environment. In one embodiment, the line matcher generates a scatter matrix for each line segment in the vector map of the environment. In one embodiment, the line matcher encodes the center of mass of the line segment, a mass of the line segment, and an orientation of the line segment. In one embodiment, the center of mass represents the covariance (or the uncertainty) at each endpoint of the line segment. As illustrated above, the covariance can have an elliptical shape which illustrates possible locations of the endpoints of the line segment. The mass of the line segment can represent the number of observations (or range data points) used to form the line segment. The orientation is the orientation of the line segment on the floor of the environment. By storing this information in a matrix for each line segment in the vector map, the map can use significantly less memory than an occupancy grid. FIG.12is a block diagram of a computing system1200for performing a machine learning clustering algorithm, according to various embodiments. The computing system1200can include a single computer or multiple interconnected computers (e.g., a data center). 
In one embodiment, the computing system1200may be part of the robot105illustrated inFIG.1and used to cluster range data points1215collected by the range sensor on the robot105. However, the embodiments herein are not limited to being used by a robot for mapping environments. In other embodiments, the computing system1200is used in a computer vision system to form depth images of an environment which can be used in navigation, video games (e.g., to generate a virtual reality environment or an augmented reality environment), to identify motion in the environment, to identify specific objects in the environment for a computer vision system (e.g., a rocking chair versus a sofa), and the like. The computing system1200includes a processor1205which represents any number of processing elements which can include any number of cores, and a memory1210which can include volatile and non-volatile memory. The memory1210stores a machine learning (ML) clustering module1220which can be an application or a software module which uses range data points1215collected in an environment to generate a hierarchical cluster1225. Generally, the hierarchical cluster1225includes multiple levels where each level contains features representing one or more real-world objects. For example, the first level (e.g., Level A) can include the range data points1215which correspond to points along edges of physical objects in the environment, the second level (e.g., Level B) can include lines that correspond to the edges of the physical object, the third level (e.g., Level C) can include interconnected lines forming boundaries of the physical objects, and the fourth level (e.g., Level D) can define the objects within an enclosure such as a room in the environment. While the embodiments below describe a hierarchical cluster1225with four levels, the cluster1225can have fewer levels or more levels depending on the application. The ML clustering module1220includes an uncertainty calculator1230and a distance calculator1235. The uncertainty calculator1230identifies an uncertainty of each feature at each level of the cluster1225. For example, due to error or inaccuracies in the range sensor, the range data points1215may have uncertainty regions which indicate an area where the range data points1215may be located. An example of these uncertainty regions is shown inFIG.7by the ellipses710surrounding the points705. The ellipses710indicate uncertainty regions centered at the points705within which the actual location of the object should be. Thus, the actual point of the object is likely to be somewhere within the ellipses710. The uncertainty calculator1230can use the uncertainty of the range data points1215to determine uncertainty regions for the features in the higher levels of the hierarchical cluster1225. That is, after clustering the range data points1215to form a line, the uncertainty calculator can determine an uncertainty region for the location and orientation of the line in the environment. Put differently, the uncertainty calculator1230can determine an area where the line is most likely located. The distance calculator1235calculates a distance between features in the levels of the cluster1225. For example, the distance calculator1235can determine the distance between range data points1215in the first level of the cluster1225, the distance between lines in the second level, and the distance between objects in the third level. 
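Before describing how the module combines these two quantities, the following sketch shows one way a distance and an uncertainty could be turned into a clustering probability. The isotropic Gaussian model, the chi-squared test, and the threshold value are illustrative assumptions, not the disclosed formula:

```python
# Hypothetical sketch: the probability that two features coincide,
# given their separation and their pooled location uncertainty.
import numpy as np
from scipy.stats import chi2

def same_feature_probability(p1, p2, sigma1, sigma2):
    """p1, p2: 2D feature locations; sigma1, sigma2: per-feature standard
    deviations (meters) from the uncertainty calculator. Returns the
    probability that the separation is explained by noise alone."""
    d2 = np.sum((np.asarray(p1) - np.asarray(p2)) ** 2)
    pooled = sigma1 ** 2 + sigma2 ** 2
    # Mahalanobis-style test: large separation vs. pooled noise -> low p.
    return 1.0 - chi2.cdf(d2 / pooled, df=2)

def should_cluster(p1, p2, sigma1, sigma2, threshold=0.8):
    """Group the two features only if the probability clears a threshold."""
    return same_feature_probability(p1, p2, sigma1, sigma2) >= threshold
```

Under this model, the same 2 cm separation yields a high probability when the pooled uncertainty is comparable to the separation and a near-zero probability when the sensor is an order of magnitude more accurate, which mirrors the behavior described below.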
The ML clustering module1220determines whether to group features to form a higher-level feature using both the uncertainties determined by the uncertainty calculator1230and the distances determined by the distance calculator1235. That is, unlike other clustering algorithms that rely solely on distance to determine whether to cluster the features in one level to form other features in a higher level, the ML clustering module1220uses uncertainties regarding the location and/or the orientation of the features to determine whether the features should be grouped together in an upper level. Put differently, other ML clustering techniques do not consider the inaccuracies of the sensors used to generate the location/orientation information. By considering the uncertainties introduced by these errors, the ML clustering module1220can more accurately model the physical environment when clustering. Stated differently, the variance or uncertainty of the features provides a more accurate physical model of the environment. In one embodiment, the ML algorithm used by the ML clustering module1220is an unsupervised hierarchical agglomerative clustering algorithm which does not rely on training data to cluster the features at different levels. FIG.13illustrates using range data to generate the hierarchical cluster1225, according to various embodiments. As shown, the cluster1225includes four levels: Level A, Level B, Level C, and Level D. Level A is the lowest level in the hierarchy and includes the range data points1215. As mentioned above, range data points1215can correspond to points along an edge or surface of a physical object. As described in more detail below, the ML clustering module can cluster a set of the range data points1215in Level A to form a line1305in Level B. Each line1305corresponds to a portion of an edge of a physical object. However, at Level B, the ML clustering module does not know if the lines1305correspond to the same object or different objects. At Level C, the clustering module clusters together lines1305from Level B to form outlines or boundaries1310of physical objects. The boundaries1310can outline a bottom surface of the object or a side surface of the object (e.g., the side the sensor is facing that generates the range data points1215). At Level D, the clustering module identifies objects in Level C that can be clustered together into the same enclosure1315in the environment. In this example, two of the objects identified in Level C are clustered into Room A. For example, an environment (e.g., a house, business, or warehouse) may have multiple rooms and Level D can indicate which objects are located in which rooms (or enclosures1315) within the environment. However, not all of the features in each level in the cluster1225can be clustered to form a higher-level feature. InFIG.13, Level A includes non-clusterable points1320which do not satisfy the clustering criteria. That is, using the distance between the points1320and their associated uncertainty regions, the ML clustering module determines that the points1320are non-clusterable. This means the ML clustering module determines the points1320are not part of the same edge or surface. Similarly, the cluster1225includes non-clusterable lines1325. These lines1325include range data points1215that were clusterable when moving from Level A to Level B, but the distances between the lines and their associated uncertainties mean the ML clustering algorithm does not cluster the lines1325to form a boundary1310for the same object. 
Stated differently, the ML clustering module determines that the non-clusterable lines1325are edges of different objects, and thus, should not be clustered or grouped together in the next level—i.e., Level C. FIG.14is a flowchart of a method1400for generating a hierarchical cluster, according to various embodiments. At block1405, the ML clustering module determines, using distance and uncertainty, whether multiple points can be clustered into a line. For clarity, the method1400is discussed in tandem with the hierarchical cluster1225illustrated inFIG.13. As shown there, the clustering module evaluates the range data points1215to identify a set of points1215that should be grouped together to form a higher-level feature in the next level of the hierarchy—e.g., a line1305. In one embodiment, the range data points1215may be identified by generating 2D or 3D point clouds from the range data generated by the range sensor. Although the range sensor may be mounted on a robot, this is not a requirement. In other examples, the range sensor may be part of a video game system, a computer vision system, and the like. InFIG.13, the clustering module determines there are six sets of points1215that can be clustered to form lines while three of the points—i.e., the non-clusterable points1320—should not. As described below, the ML clustering module can use the distance and uncertainty regions associated with the points1215to determine which points1215should be clustered into higher-level features and which should not. At block1410, the ML clustering module determines, using distance and uncertainty, whether multiple lines can be clustered to form a boundary of an object. That is, the endpoints of the lines can be connected to form a shape of a surface of the object such as the object's footprint on the floor of the environment or a side surface of the object. InFIG.13, the left two lines in Level B are clustered together to form two sides of a boundary1310in Level C. The other two sides of the boundary1310may be formed from other lines in Level B which are not shown. Another two of the lines1305in Level B are clustered together to form two sides of the right boundary1310in Level C. Like when clustering the range data points1215, the ML clustering module can use distance and uncertainty to determine when lines1305should not be clustered together into an object. InFIG.13, the right two lines1305in Level B are not clustered. That is, the ML clustering module determines that these lines1305define edges in different physical objects, and thus, should not be clustered together to form a boundary in Level C. At block1415, the ML clustering module determines, using distance and uncertainty, whether multiple objects can be clustered into a room or enclosure in the environment. For example, the environment may contain multiple rooms or portions which can be divided by actual walls or pre-defined boundaries (which may not be defined by physical structures). InFIG.13, the distances and uncertainties associated with the objects defined by the boundaries1310are such that the clustering module determines they are in the same enclosure1315—i.e., Room A. Thus, the objects in Level C are grouped together into the enclosure1315in Level D. Although not shown, objects in Level C may be spread far enough apart (or have large enough uncertainties) that the clustering module does not cluster these objects into the same enclosure with other objects. 
In that case, the objects may not be clustered thereby indicating they are in their own enclosures or that the clustering algorithm is unable to definitively determine which enclosure the object is in. FIG.15is a flowchart of a method1500for grouping features into higher-level features, according to various embodiments. In one embodiment, the method1500is repeated when clustering features at each level in the hierarchical cluster. For example, the method1500may be used to cluster the range data points in Level A to form the lines in Level B and then used again to cluster the lines in Level B to form the object boundaries in Level C, and so forth. At block1505, the distance calculator calculates the distance between the features in the current level of clustering. Using the range data points as an example, the distance calculator can calculate the distance between each point and every other point in the range data points. The points that correspond to an edge or surface on the same physical object are generally closer than points that correspond to an edge or surface on a different physical object. When clustering lines, the distance calculator may determine the distances between the midpoints of the lines. When clustering object boundaries, the distance calculator may determine the distance from the center of one object boundary to the centers of the other object boundaries. However, these are only examples of measuring the distances between lines and objects, and other methods and techniques can be used to identify these distances. At block1510, the uncertainty calculator calculates a probability that the features are part of a same higher-level feature using the distance and the uncertainty corresponding to the features. That is, rather than relying solely on distance, the ML clustering module can use the uncertainty of the features—e.g., the variance in the location of the features due to errors or inaccuracies in the range data—to determine a probability that two features are part of the same high-level feature. For example, the distance calculator may determine that two range data points are within 2 cm of each other while the uncertainty calculator determines that the uncertainty region of the points (e.g., the ellipses illustrated inFIG.7) has a 2 cm diameter. The size of the uncertainty region may be determined from the specifications of the range sensor (e.g., the manufacturer states it has an accuracy of +/−1 cm) or by testing the range sensor in the environment. Given these distance and uncertainty values, the uncertainty calculator determines that there is a 90% likelihood that the two points are part of the same edge or surface. In another example, two range data points may be separated by 2 cm but the uncertainty region may be only 1 mm in diameter (e.g., the range sensor is more accurate). In this example, even though the separation distance is the same as in the previous example, the uncertainty calculator determines that there is a 10% likelihood that the two points are part of the same edge or surface. As such, by identifying and considering uncertainty in addition to distance between features, the ML clustering module can make more accurate decisions regarding whether lower-level features should be clustered together to form higher-level features. In one embodiment, the uncertainty calculator uses the uncertainties corresponding to the range data points to generate uncertainties for the higher-level features—e.g., the lines and object boundaries. 
For example, the uncertainty calculator can identify a respective covariance for the endpoints of the lines. The calculator can also determine an uncertainty in the location of the object boundaries from the uncertainty of the endpoints of the individual lines forming the boundaries. In this manner, the uncertainty of the range data can be used at each level to determine which features to cluster. At block1515, the ML clustering module determines whether the probability determined at block1510satisfies a threshold. For example, the ML clustering module may require that the probability be above 80% before the features are clustered. That is, for four points to be clustered into a line, the points may all have to have at least an 80% probability that they are part of the same object. In another example, rather than all of the points having to have a threshold probability when compared to all the other points, the ML clustering module may cluster points so long as at least one point has a probability of at least 80% that it is part of the same edge as at least one other point in the set of four points. In this example, some of the clustered points in the line may have a probability relative to other points in the line that is less than 80%. If the current feature does not have sufficient probability scores to be clustered, the method1500proceeds to block1520where the ML clustering module maintains the feature as a separate feature in the next level of the hierarchical cluster. Put differently, the feature is not combined with other features to form a higher-level feature in the next level. If the probability is satisfied, at block1525the ML clustering module groups the features into a higher-level feature in the next level. For example, a subset of the range data points is clustered into a line or a subset of the lines is clustered into an object boundary. At block1530, the ML clustering module determines whether there is another level in the hierarchical cluster that should still be evaluated. That is, the clustering module determines whether the method1500has reached the top of the hierarchical cluster. If not, at block1535the ML clustering module proceeds to the next level and repeats the method1500. If so, the method1500ends. Once the hierarchical cluster is generated, the computing system can transmit the cluster to be used by a robotic navigation system, a video game system to generate a VR or AR environment, a computer vision system, and the like. Using the clustered data, these systems can identify objects in the environment, navigate through the environment, display virtual content, and the like. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. In the preceding, reference is made to embodiments presented in this disclosure. However, the scope of the present disclosure is not limited to specific described embodiments. 
Instead, any combination of the features and elements described herein, whether related to different embodiments or not, is contemplated to implement and practice contemplated embodiments. Furthermore, although embodiments disclosed herein may achieve advantages over other possible solutions or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the scope of the present disclosure. Thus, the aspects, features, embodiments and advantages described herein are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the invention” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s). Aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. 
A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. 
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the FIGS. illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the FIGS. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
DETAILED DESCRIPTION This patent document provides implementations and examples of an image sensing device and a photographing device including the image sensing device. Some implementations of the disclosed technology relate to sensing a distance to a target object by changing an operation mode. The disclosed technology provides various implementations of an image sensing device which can select an optimum Time of Flight (TOF) method based on a distance to a target object, and can thus sense the distance to the target object using the optimum TOF method. Reference will now be made in detail to the embodiments of the disclosed technology, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG.1is a block diagram illustrating an example of a photographing device based on some implementations of the disclosed technology. Referring toFIG.1, the photographing device may refer to a device, for example, a digital still camera for capturing still images or a digital video camera for capturing moving images. For example, the photographing device may be implemented as a Digital Single Lens Reflex (DSLR) camera, a mirrorless camera, or a mobile phone (especially, a smartphone), and others. The photographing device may include a device having both a lens and an image pickup element such that the device can capture (or photograph) a target object and can thus create an image of the target object. The photographing device may include an image sensing device100and an image signal processor200. The image sensing device100may measure the distance to a target object using a Time of Flight (TOF) method to measure the time for the light to travel between the image sensing device100and the target object. The image sensing device100may include a light source10, a lens module20, a pixel array110, a first pixel driver labeled as “direct pixel driver120,” a second pixel driver labeled as “indirect pixel driver130,” a direct readout circuit140, an indirect readout circuit150, a timing controller160, and a light source driver170. The light source10may emit light to a target object1upon receiving a clock signal carried by a modulated light signal (MLS) from the light source driver170. The light source10may be a laser diode (LD) or a light emitting diode (LED) for emitting light (e.g., near infrared (NIR) light, infrared (IR) light or visible light) having a specific wavelength band, or may be any one of a Near Infrared Laser (NIR), a point light source, a monochromatic light source combined with a white lamp or a monochromator, and a combination of other laser sources. For example, the light source10may emit infrared light having a wavelength of 800 nm to 1000 nm. AlthoughFIG.1shows only one light source10for convenience of description, other implementations are also possible, and a plurality of light sources may also be arranged in the vicinity of the lens module20. The lens module20may collect light reflected from the target object1, and may allow the collected light to be focused onto pixels of the pixel array110. For example, the lens module20may include a focusing lens having a surface formed of glass or plastic or another cylindrical optical element having a surface formed of glass or plastic. The lens module20may include a single lens group of one or more lenses. 
The pixel array110may include a plurality of pixels (PXs) consecutively arranged in a two-dimensional (2D) matrix structure for capturing and detecting incident light for measuring distances. The pixels are arranged in a column direction and a row direction perpendicular to the column direction. Each pixel (PX) may convert incident light received through the lens module20into an electrical signal corresponding to the amount of incident light, and may thus output a pixel signal using the electrical signal. In implementations, the device can be configured so that the pixel signal may not indicate the color of the target object1, and may be a signal indicating the distance to the target object1. The pixel array110may include, in addition to the imaging pixels, a first pixel array112, “direct pixel array,” which includes sensing pixels called “direct pixels” which are capable of sensing the distance to the target object1using a first technique for measuring the TOF such as a direct TOF method as further explained below, and a second pixel array114, “indirect pixel array,” which includes sensing pixels called “indirect pixels” which are capable of sensing the distance to the target object1using a second technique for measuring the TOF different from the first technique, such as an indirect TOF method as further explained below. The two pixel arrays112and114performing the TOF measurement for determining the distance may have different TOF characteristics, e.g., the first TOF technique may have a longer effective measurement distance and a lower spatial resolution, and the second TOF technique may have a higher spatial resolution and a shorter effective measurement distance. The inclusion of two or more such different TOF sensing pixels enables the device to detect objects located both near and far from the image sensing device while allowing such different TOF sensing pixels to complement one another and to collectively provide the ability to sense objects at varying distances. In operation, a control circuit is provided to select one of the two pixel arrays112and114to measure a distance to a target object based on the different distance measuring characteristics of the two pixel arrays112and114to optimize the performance of distance measurements. Referring toFIGS.1and10, the direct pixels1010may be arranged in a line sensor shape within the pixel array1005, such that the entire region including the direct pixels1010arranged in the line sensor shape may be smaller in size than the region including the indirect pixels1040. This is because the direct pixels1010are designed to have a relatively longer effective measurement distance and a relatively higher temporal resolution rather than for the purpose of acquiring an accurate depth image. As a result, the direct pixels1010can recognize the presence or absence of the target object1in the object monitoring mode using the relatively longer effective measurement distance and the relatively higher temporal resolution, and at the same time can correctly measure the distance to the target object1using the relatively longer effective measurement distance and the relatively higher temporal resolution. 
As an example of the first technique for measuring TOF, the direct TOF method may directly measure a round trip time from a first time where pulse light is emitted to the target object1to a second time where pulse light reflected from the target object1is incident, and may thus calculate the distance to the target object1by using the round trip time and the speed of light. As an example of the second technique for measuring TOF, the indirect TOF method may emit light modulated by a predetermined frequency to the target object1, may sense modulated light that is reflected from the target object1, may calculate a phase difference between a clock signal MLS controlling the modulated light and a pixel signal generated from detecting the modulated light reflected back from the target object1, and may thus calculate the distance to the target object1based on the phase difference between the clock signal MLS and the pixel signal. Generally, whereas the direct TOF method may have advantages in that it has a relatively higher temporal resolution and a longer effective measurement distance, the direct TOF method may have disadvantages in that it has a relatively lower spatial resolution due to a one-to-one correspondence structure between each pixel and each readout circuit. The spatial resolution may be used to refer to the ability to discern a spatial difference. As each pixel is reduced in size, the spatial resolution may increase. Temporal resolution may be used to refer to the ability to discern a temporal difference. As time required by the pixel array110for outputting a pixel signal corresponding to a single frame is shortened, the temporal resolution may increase. A time needed by each sensing pixel for measuring the TOF using the first or the second TOF measurement technique is referred to as a unit sensing time. The power used during the unit sensing time by each sensing pixel for measuring the TOF is referred to as a unit power consumption. In some implementations in which the sensing pixel for measuring the TOF using the first technique is configured to receive a relatively high reverse bias voltage as will be described later, such a sensing pixel may have a relatively higher unit power consumption than that of the sensing pixel measuring the TOF using the second technique. In some implementations, the direct pixel may be a single-photon avalanche diode (SPAD) pixel. The operation principles of the SPAD pixel are as follows. A reverse bias voltage may be applied to the SPAD pixel to increase an electric field, resulting in formation of a strong electric field. Subsequently, impact ionization may occur, in which electrons generated by incident photons are accelerated by the strong electric field and move from one place to another, generating electron-hole pairs. Specifically, in the SPAD pixel configured to operate in a Geiger mode in which a reverse bias voltage higher than a breakdown voltage is received, carriers (electrons or holes) generated by incident light may collide with electrons and holes generated by the above impact ionization, such that a large number of carriers may be generated by such collision. Accordingly, although a single photon is incident upon the SPAD pixel, avalanche breakdown may be triggered by the single photon, resulting in formation of a measurable current pulse. A detailed structure and operations of the SPAD pixel will be described later with reference toFIG.4. In some implementations, the indirect pixel may be a circulation pixel. 
In the circulation pixel, a first operation of moving photocharges generated by a photoelectric conversion element in response to reflected light in a predetermined direction (e.g., a clockwise or counterclockwise direction) and a second operation of transferring the photocharges collected by such movement to a plurality of floating diffusion (FD) regions can be performed separately from each other. For example, each circulation pixel may include a plurality of circulation gates and a plurality of transfer gates that surround the photoelectric conversion element. The potentials of the circulation gates and the transfer gates may be changed while being circulated in a predetermined direction. Photocharges generated by the photoelectric conversion element may move and transfer in a predetermined direction by a change in circulation potential between the circulation gates and the transfer gates. As described above, movement of photocharges and transfer of photocharges may be performed separately from each other, such that a time delay based on the distance to the target object1can be more effectively detected. A detailed structure and operations of the circulation pixel will be described later with reference toFIGS.5to8. In addition, photocharges mentioned in the disclosed technology may be photoelectrons. The direct pixel driver120may drive the direct pixel array112of the pixel array110in response to a control signal from the timing controller160. For example, the direct pixel driver120may generate a quenching control signal to control a quenching operation for reducing a reverse bias voltage applied to the SPAD pixel to a breakdown voltage or less. In addition, the direct pixel driver120may generate a recharging control signal for implanting charges into a sensing node connected to the SPAD pixel. The indirect pixel driver130may drive the indirect pixel array114of the pixel array110in response to a control signal from the timing controller160. For example, the indirect pixel driver130may generate a circulation control signal, a transfer control signal, a reset control signal, and a selection control signal. In more detail, the circulation control signal may control movement of photocharges within a photoelectric conversion element of each pixel. The transfer control signal may allow moved photocharges to be sequentially transferred to the floating diffusion (FD) regions. The reset control signal may initialize each pixel. The selection control signal may control output of an electrical signal corresponding to a voltage of the floating diffusion (FD) regions. The direct readout circuit140may be disposed at one side of the pixel array110, may calculate a time delay between a pulse signal generated from each pixel of the direct pixel array112and a reference pulse, and may generate and store digital data corresponding to the time delay. The direct readout circuit140may include a time-to-digital circuit (TDC) configured to perform the above-mentioned operation. The direct readout circuit140may transmit the stored digital data to the image signal processor200under control of the timing controller160. The indirect readout circuit150may process an analog pixel signal generated from each pixel of the indirect pixel array114, and may thus generate and store digital data corresponding to the pixel signal. 
For example, the indirect readout circuit150may include a correlated double sampler (CDS) circuit for performing correlated double sampling on the pixel signal, an analog-to-digital converter (ADC) circuit for converting an output signal of the CDS circuit into digital data, and an output buffer for temporarily storing the digital data. The indirect readout circuit150may transmit the stored digital data to the image signal processor200under control of the timing controller160. The timing controller160may control overall operation of the image sensing device100. Thus, the timing controller160may generate a timing signal to control operations of the direct pixel driver120, the indirect pixel driver130, and the light source driver170. In addition, the timing controller160may control activation or deactivation of each of the direct readout circuit140and the indirect readout circuit150, and may control digital data stored in the direct readout circuit140and digital data stored in the indirect readout circuit150to be simultaneously or sequentially transmitted to the image signal processor200. Specifically, the timing controller160may selectively activate or deactivate the direct pixel array112, the direct pixel driver120, and the direct readout circuit140under control of the image signal processor200, or may selectively activate or deactivate the indirect pixel array114, the indirect pixel driver130, and the indirect readout circuit150under control of the image signal processor200. Operations for each mode of the image sensing device100will be described later with reference toFIGS.2and3. The light source driver170may generate a clock signal carried by a modulated light signal (MLS) capable of driving the light source10in response to a control signal from the timing controller160. The image signal processor200may process digital data received from the image sensing device100, and may generate a depth image indicating the distance to the target object1. Specifically, the image signal processor200may calculate the distance to the target object1for each pixel in response to a time delay denoted by digital data received from the direct readout circuit140. In addition, the image signal processor200may calculate the distance to the target object1for each pixel in response to a phase difference denoted by digital data received from the indirect readout circuit150. The image signal processor200may control operations of the image sensing device100. Specifically, the image signal processor200may analyze (or resolve) digital data received from the image sensing device100, may decide a mode of the image sensing device100based on the analyzed result, and may control the image sensing device100to operate in the decided mode. The image signal processor200may also perform image signal processing, such as noise cancellation and image quality improvement, on the depth image. The depth image generated from the image signal processor200may be stored in an internal memory of the photographing device or of a device including the photographing device, or in an external memory, either in response to a user request or in an automatic manner, such that the stored depth image can be displayed through a display. Alternatively, the depth image generated from the image signal processor200may be used to control operations of the photographing device or the device including the photographing device. 
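As a concrete illustration of these two calculations, the following Python sketch recovers a distance from a direct TOF time delay and from an indirect TOF phase difference. It is a minimal sketch only: the function names are illustrative, and a standard four-phase demodulation scheme (analogous to the four tap signals described later with reference toFIG.6) is assumed for the indirect path.

import math

C = 299_792_458.0  # speed of light in m/s

def direct_tof_distance(round_trip_time_s):
    # Direct TOF: the light travels to the target and back, so the
    # distance is half the round trip time multiplied by the speed of light.
    return C * round_trip_time_s / 2.0

def indirect_tof_distance(a0, a90, a180, a270, f_mod_hz):
    # Indirect TOF: recover the phase delay of the modulated light from
    # four samples taken at 0, 90, 180, and 270 degrees (a common
    # four-phase scheme assumed here), then convert phase to distance.
    phase = math.atan2(a90 - a270, a0 - a180) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

# A 100 ns round trip corresponds to roughly 15 m.
print(direct_tof_distance(100e-9))
# With a 20 MHz modulation frequency the unambiguous range is c/(2f),
# roughly 7.5 m, so the indirect result wraps around beyond that.
print(indirect_tof_distance(30.0, 80.0, 70.0, 20.0, 20e6))

The unambiguous range limit of the phase-based path is one way to see why the indirect TOF method is paired with a shorter effective measurement distance than the direct method.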
FIG.2is a diagram illustrating an example of operations for each mode of the image sensing device100shown inFIG.1based on some implementations of the disclosed technology. Referring toFIG.2, the photographing device may be embedded in various kinds of devices, for example, a mobile device such as a smartphone, a transportation device such as a vehicle, a surveillance device such as a closed circuit television (CCTV), and others. For convenience of description and better understanding of the disclosed technology, it is assumed that the photographing device shown inFIG.1is embedded in a vehicle300. The vehicle300including the photographing device will hereinafter be referred to as a host vehicle for convenience of description. The image sensing device100embedded in the host vehicle300may sense the distance to the target object1using the direct pixel array112according to the direct TOF method, or may sense the distance to the target object1using the indirect pixel array114according to the indirect TOF method. As stated above, the direct TOF method may have a longer effective measurement distance and a lower spatial resolution, and the indirect TOF method may have a higher spatial resolution and a shorter effective measurement distance. Therefore, a first range within which the direct pixel array112can effectively measure the distance to the target object1(for example, at a valid reliability level corresponding to a predetermined reliability or greater) will hereinafter be denoted by a first effective measurement region (EMA1), and a second range within which the indirect pixel array114can effectively measure the distance to the target object1(for example, at a valid reliability level corresponding to a predetermined reliability or greater) will hereinafter be denoted by a second effective measurement region (EMA2). In this case, the effective measurement distance may refer to a maximum distance over which the direct pixel array112or the indirect pixel array114can effectively sense the distance to the target object1at a certain reliability level that is equal to or greater than a predetermined reliability threshold. Here, the effective measurement distance of the direct pixel may be longer than that of the indirect pixel. As can be seen fromFIG.2, a Field of View (FOV) of the first effective measurement region EMA1 may be less than that of the second effective measurement region EMA2. Operations of the image sensing device100based on the direct TOF method are as follows. In accordance with the direct TOF method, each pixel generates a pulse signal when incident light is sensed, and as soon as the pulse signal is generated, the readout circuit converts the generation time of the pulse signal into digital data indicating a time of flight (TOF) and then stores the digital data. Each pixel is configured to generate a pulse signal by sensing incident light without the capability to store information, and thus the readout circuit is needed to store information needed for distance calculation. As a result, a readout circuit is needed for each pixel. For example, the readout circuit may be included in each pixel. However, if the array is configured with the plurality of pixels, each including the readout circuit, each pixel may have an unavoidable increase in size due to the readout circuit. 
In addition, since an overall size for a region allocated to the array is restricted, it may be difficult to increase the number of pixels to be included in the array. Therefore, in some implementations of the disclosed technology, the readout circuit may be located outside the pixel array such that as many pixels as possible can be included in the pixel array. In some implementations, the array including direct pixels may be formed in an X-shape or a cross-shape such that the readout circuit and the direct pixel may be arranged to correspond to each other on a one-to-one basis. The above-mentioned operation method may be referred to as a line scanning method. When the readout circuit is located outside the pixel array, even if direct pixels are included in the same row or same column of the pixel array, the direct pixels are not simultaneously activated and only one of the direct pixels on the same row or the same column can be activated. Operations of the image sensing device100based on the indirect TOF method are as follows. In accordance with the indirect TOF method, each pixel may accumulate photocharges corresponding to the intensity of incident light, and the readout circuit may convert a pixel signal corresponding to the photocharges accumulated in each pixel into digital data and then store the digital data. Each pixel can store information needed for distance calculation using photocharges without the readout circuit. As a result, pixels can share the readout circuit, and indirect pixels contained in the array including the indirect pixels can be simultaneously driven. The above-mentioned operation method may be referred to as an area scanning method. Therefore, the number of pixels that are simultaneously driven when using the line scanning method is relatively smaller than that when using the area scanning method. Thus, a field of view (FOV) of the first effective measurement region EMA1 of the array including direct pixels driven by the line scanning method may be less than an FOV of the second effective measurement region EMA2 of the array including indirect pixels driven by the area scanning method. Referring back toFIG.2, within the range L16 from the host vehicle300, the direct pixel array112can effectively measure the distance to the target object1. Thus, the range within which the distance to the host vehicle300is denoted by L16 or less will hereinafter be defined as a direct TOF zone. Within the range L4 from the host vehicle300, the indirect pixel array114can effectively measure the distance to the target object1. Thus, the range within which the distance to the host vehicle300is denoted by L4 or less will hereinafter be defined as an indirect TOF zone. Each of L0 to L16 may correspond to a value indicating a specific distance, and the spacing between Ln (where “n” is any one of 0 to 15) and L(n+1) may be constant. The length of the direct TOF zone may be four times the length of the indirect TOF zone. The range and the length of the direct TOF zone or the indirect TOF zone as discussed above are examples only and other implementations are also possible. As can be seen fromFIG.2, it is assumed that first to fourth vehicles VH1˜VH4 are respectively located at four different positions in a forward direction of the host vehicle300. Since the first to fourth vehicles VH1˜VH4 are included in the direct TOF zone, the distance between the host vehicle300and each of the vehicles VH1˜VH4 can be sensed using the direct TOF method. 
However, since the first to third vehicles VH1˜VH3 are not included in the indirect TOF zone, the distance between the host vehicle300and each of the vehicles VH1˜VH3 cannot be sensed using the indirect TOF method. Thus, the distance to each of the first to third vehicles VH1˜VH3 may be sensed using the direct TOF method only. Meanwhile, since the fourth vehicle VH4 may be included in the direct TOF zone and in the indirect TOF zone, the distance to the fourth vehicle VH4 may be sensed using the direct TOF method or the indirect TOF method. A forward region of the host vehicle300may be classified into a hot zone and a monitoring zone based on the distance to the host vehicle300. The hot zone may correspond to an area distanced from the host vehicle300by the distance that is equal to or shorter than a threshold distance (e.g., L4). In the hot zone, the distance to a target object is relatively short, and thus the sensing of the position of the target object in the hot zone requires a high level of accuracy. The monitoring zone may correspond to an area distanced from the host vehicle300by the distance that is longer than a threshold value (e.g., L4). In the monitoring zone, since the distance to a target object is relatively long, the sensing of the existence of the target object in a forward region (e.g., the presence or absence of the target object) is required, while the sensing of the position of the target object in the monitoring zone does not require such a high level of accuracy. In more detail, in the hot zone, a method for sensing the distance to a target object using the indirect TOF method having a higher spatial resolution may be considered more advantageous. In the monitoring zone, a method for sensing the distance to a target object using the direct TOF method having a longer effective measurement distance may be considered more advantageous. For example, the distance to each of the first to third vehicles VH1˜VH3 may be more advantageously sensed using the direct TOF method, and the distance to the fourth vehicle VH4 may be more advantageously sensed using the indirect TOF method. As can be seen fromFIG.2, the distance to the first vehicle VH1 may be denoted by L13, the distance to the second vehicle VH2 may be denoted by L9, the distance to the third vehicle VH3 may be denoted by L4, and the distance to the fourth vehicle VH4 may be denoted by L1. In some implementations, the hot zone may be identical to the indirect TOF zone, and the monitoring zone may refer to a region obtained by subtracting the indirect TOF zone from the direct TOF zone. In some other implementations, the hot zone may be larger or smaller than the indirect TOF zone. AlthoughFIG.2shows the exemplary case in which the photographing device is embedded in the vehicle as an example, other implementations are also possible, and the photographing device may be embedded in other devices. The method for selectively using the direct TOF method or the indirect TOF method in response to the distance to the target object can be applied to, for example, a face/iris recognition mode implemented by a wake-up function from among sleep-mode operations of a mobile phone, a surveillance mode for detecting the presence or absence of a target object using a CCTV, and a photographing mode for precisely photographing the target object. 
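The zone logic above can be summarized in a short, hedged sketch. The thresholds follow theFIG.2example, with the hot zone taken as identical to the indirect TOF zone; the step size and all names are illustrative assumptions rather than values from this patent document.

# Assume, purely for illustration, that each step Ln is 10 m apart.
STEP_M = 10.0
HOT_ZONE_LIMIT_M = 4 * STEP_M      # L4: hot zone / indirect TOF zone boundary
DIRECT_ZONE_LIMIT_M = 16 * STEP_M  # L16: direct TOF zone boundary

def select_tof_method(distance_m):
    # Hot zone: the indirect method's higher spatial resolution is preferred.
    # Monitoring zone: only the direct method's longer effective
    # measurement distance reaches the target.
    if distance_m <= HOT_ZONE_LIMIT_M:
        return "indirect"
    if distance_m <= DIRECT_ZONE_LIMIT_M:
        return "direct"
    return "out_of_range"

# The FIG.2 example: VH1 at L13 and VH4 at L1.
print(select_tof_method(13 * STEP_M))  # direct
print(select_tof_method(1 * STEP_M))   # indirect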
FIG.3is a flowchart illustrating an example of operations for each mode of the image sensing device100shown inFIG.1based on some implementations of the disclosed technology. Referring toFIGS.2and3, the image sensing device100may operate in an object monitoring mode or in a depth resolving mode under control of the image signal processor200. In the object monitoring mode, the direct pixel array112, the direct pixel driver120, and the direct readout circuit140may be activated, while the indirect pixel array114, the indirect pixel driver130, and the indirect readout circuit150may be deactivated. In the depth resolving mode, the indirect pixel array114, the indirect pixel driver130, and the indirect readout circuit150may be activated, while the direct pixel array112, the direct pixel driver120, and the direct readout circuit140may be deactivated. If the distance sensing operation of the image sensing device100is started, the image sensing device100operates in the object monitoring mode by default and generates digital data indicating the distance to a target object using the direct TOF method (step S10). The image sensing device100may transmit digital data generated from the direct pixel array112to the image signal processor200. The image signal processor200may calculate a distance to a target object based on the digital data, and may determine whether the calculated distance to the target object is equal to or shorter than a threshold distance for determining the range of a hot zone, such that the image signal processor200can determine whether the target object is detected in the hot zone (step S20). If the calculated distance to the target object is longer than the threshold distance (i.e., “No” in step S20), the image sensing device100may continuously operate in the object monitoring mode. For example, if the target object is any one of the first to third vehicles VH1˜VH3 shown inFIG.2, the image sensing device100may continuously operate in the object monitoring mode. If the calculated distance to the target object is equal to or shorter than the threshold distance (i.e., “Yes” in step S20), the image signal processor200may increase the counted resultant value stored in a mode counter embedded therein by a predetermined value (e.g., “1”). In addition, the image signal processor200may determine whether the counted resultant value stored in the mode counter is higher than a predetermined mode switching value K (where K is an integer) in step S30. If a predetermined time (or an initialization time) has elapsed, or if the operation mode of the image sensing device100switches from the object monitoring mode to the depth resolving mode, the counted resultant value may be initialized. Therefore, within the predetermined time (or the initialization time), the image signal processor200may determine whether a specific event in which the calculated distance to the target object is equal to or shorter than the threshold distance has occurred a predetermined number of times or more. As a result, cases in which the counted resultant value is unexpectedly changed due to erroneous detection, or in which the target object is only temporarily located in the hot zone, can be excluded. If the counted resultant value is equal to or less than a predetermined mode switching value K (i.e., “No” in step S30), the image sensing device100may continuously operate in the object monitoring mode. 
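The debouncing effect of the mode counter can be sketched as a small state machine, including the switch to the depth resolving mode in step S40 described below. This is a hedged illustration only; the class and all of its names are hypothetical and do not appear in the patent.

class ModeController:
    # The counter must exceed K before the mode switches, so a single
    # erroneous detection or a briefly passing object is filtered out.
    def __init__(self, threshold_m, k):
        self.threshold_m = threshold_m   # hot-zone boundary checked in step S20
        self.k = k                       # mode switching value K of step S30
        self.count = 0
        self.mode = "object_monitoring"  # default mode (step S10)

    def on_distance(self, distance_m):
        if self.mode == "object_monitoring":
            if distance_m <= self.threshold_m:
                self.count += 1
                if self.count > self.k:            # "Yes" in step S30
                    self.mode = "depth_resolving"  # step S40
                    self.count = 0                 # counter initialized on switching
        elif distance_m > self.threshold_m:
            # The target left the hot zone: finish depth resolving, re-enter S10.
            self.mode = "object_monitoring"
        return self.mode

    def on_initialization_time_elapsed(self):
        # The counted value is also initialized after a predetermined time.
        self.count = 0

ctrl = ModeController(threshold_m=40.0, k=3)
for d in (120.0, 35.0, 30.0, 28.0, 25.0):
    print(ctrl.on_distance(d))  # switches only after the count exceeds K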
For example, if the target object has temporarily existed at the position of the fourth vehicle VH4 shown inFIG.2, or if erroneous detection has occurred, the image sensing device100may continuously operate in the object monitoring mode. If the counted resultant value is higher than the predetermined mode switching value K (i.e., “Yes” in step S30), the image signal processor200may allow the operation mode of the image sensing device100to switch from the object monitoring mode to the depth resolving mode. Accordingly, the image sensing device100may generate digital data indicating the distance to the target object using the indirect TOF method (step S40). On the other hand, the image signal processor200may perform switching of the operation mode of the image sensing device100, and may then initialize the counted resultant value. In addition, if the image signal processor200determines that the target object1is not present in the hot zone based on digital data received from the image sensing device100, the image signal processor200may finish the depth resolving mode. In this case, the image signal processor200may control the image sensing device100to re-perform step S10. Therefore, if the distance to the target object is equal to or shorter than the threshold distance (i.e., if the target object is located in the hot zone), the image sensing device100may sense the distance to the target object using the indirect TOF method (i.e., by activating the indirect pixel array114). If the distance to the target object is longer than the threshold distance (i.e., if the target object is located in the monitoring zone), the image sensing device100may sense the distance to the target object using the direct TOF method (i.e., by activating the direct pixel array112). That is, an optimum operation mode can be selected according to the distance to the target object. In addition, in the object monitoring mode in which precise distance sensing need not be used, only some of the direct pixels may be activated, resulting in reduction in power consumption. Methods for activating the pixels included in the pixel array110during the respective operation modes will be described later with reference toFIGS.10to13. FIG.4is an equivalent circuit illustrating an example of a direct pixel DPX included in the direct pixel array112shown inFIG.1based on some implementations of the disclosed technology. The direct pixel array112may include a plurality of direct pixels (DPXs). Although it is assumed that each direct pixel (DPX) shown inFIG.4is a single-photon avalanche diode (SPAD) pixel for convenience of description, other implementations are also possible. The direct pixel (DPX) may include a single-photon avalanche diode (SPAD), a quenching circuit (QC), a digital buffer (DB), and a recharging circuit (RC). The SPAD may sense a single photon reflected by the target object1, and may thus generate a current pulse corresponding to the sensed single photon. The SPAD may be a photodiode provided with a photosensitive P-N junction. In the SPAD, avalanche breakdown may be triggered by a single photon received in a Geiger mode, in which a reverse bias voltage is applied such that the cathode-to-anode voltage is higher than a breakdown voltage, resulting in formation of a current pulse. As described above, the above-mentioned process for forming the current pulse through avalanche breakdown triggered by the single photon will hereinafter be referred to as an avalanche process. 
One terminal of the SPAD may receive a first bias voltage (Vov) for applying a reverse bias voltage (hereinafter referred to as an operation voltage) higher than a breakdown voltage to the SPAD. For example, the first bias voltage (Vov) may be a positive (+) voltage having an absolute value that is lower than an absolute value of a breakdown voltage. The other terminal of the SPAD may be coupled to a sensing node (Ns), and the SPAD may output a current pulse generated by sensing the single photon to the sensing node (Ns). The quenching circuit (QC) may control the reverse bias voltage applied to the SPAD. If a time period (or a predetermined time after pulses of the clock signal (MLS) have been generated) in which the avalanche process can be carried out has elapsed, a quenching transistor (QX) of the quenching circuit (QC) may be turned on in response to a quenching control signal (QCS) such that the sensing node (Ns) can be electrically coupled to a ground voltage. As a result, the reverse bias voltage applied to the SPAD may be reduced to a breakdown voltage or less, and the avalanche process may be quenched (or stopped). The digital buffer (DB) may perform sampling of an analog current pulse to be input to the sensing node (Ns), such that the digital buffer (DB) may convert the analog current pulse into a digital pulse signal. In this example, the sampling of the analog current pulse may be performed by converting the analog current pulse into the digital pulse signal having a logic level “0” or “1” based on a determination of whether the level of a current pulse is equal to or higher than a threshold level. However, the sampling method is not limited thereto and other implementations are also possible. Therefore, the pulse signal generated from the digital buffer (DB) may be denoted by a direct pixel output signal (DPXout), and the pulse signal denoted by the direct pixel output signal (DPXout) can be transferred to the direct readout circuit140. After the avalanche process is quenched by the quenching circuit (QC), the recharging circuit (RC) may implant or provide charges into the sensing node (Ns) such that the SPAD can re-enter the Geiger mode in which avalanche breakdown can be induced. For example, the recharging circuit (RC) may include a switch (e.g., a transistor) that can selectively connect a second bias voltage to the sensing node (Ns) in response to a recharging control signal. If the switch is turned on, the voltage of the sensing node (Ns) may reach the second bias voltage. For example, the sum of the absolute value of the second bias voltage and the absolute value of the first bias voltage may be higher than the absolute value of the breakdown voltage, and the second bias voltage may be a negative (−) voltage. Therefore, the SPAD may enter the Geiger mode, such that the SPAD may perform the avalanche process by the single photon received in a subsequent time. Although in this example each of the quenching circuit (QC) and the recharging circuit (RC) is implemented as an active device, other implementations are also possible. Thus, in some implementations, each of the quenching circuit (QC) and the recharging circuit (RC) may also be implemented as a passive device. For example, the quenching transistor (QX) of the quenching circuit (QC) may also be replaced with a resistor. The quenching control signal (QCS) and the recharging control signal may be supplied from the direct pixel driver120shown inFIG.1. 
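The detect-quench-recharge cycle described above can be mocked up in a few lines. The following sketch assumes an idealized SPAD front end with no analog detail; the class and its behavior are illustrative assumptions, not the circuit shown inFIG.4.

class SpadFrontEnd:
    # Idealized SPAD pixel cycle: a photon arriving in Geiger mode
    # triggers an avalanche pulse, the quenching circuit then drops the
    # reverse bias below breakdown, and the recharging circuit restores
    # Geiger mode before the next photon can be detected.
    def __init__(self):
        self.geiger_mode = True    # reverse bias above breakdown voltage

    def photon(self, t_s):
        if not self.geiger_mode:
            return None            # dead time: quenched, not yet recharged
        self.geiger_mode = False   # avalanche fires, then QCS quenches it
        return t_s                 # pulse timestamp handed to the readout

    def recharge(self):
        self.geiger_mode = True    # recharging control signal restores the bias

spad = SpadFrontEnd()
print(spad.photon(10e-9))   # 1e-08: photon detected
print(spad.photon(12e-9))   # None: arrives during the dead time
spad.recharge()
print(spad.photon(20e-9))   # 2e-08: detected again after recharging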
The direct readout circuit140may include a digital logic circuit configured to generate digital data by calculating a time delay between a pulse signal of the direct pixel (DPX) and a reference pulse, and an output buffer configured to store the generated digital data. The digital logic circuit and the output buffer may hereinafter be collectively referred to as a Time-to-Digital Circuit (TDC). In this case, the reference pulse may be a pulse of the clock signal (MLS). FIG.5is an equivalent circuit illustrating an example of the indirect pixel IPX included in the indirect pixel array114shown inFIG.1based on some implementations of the disclosed technology. The indirect pixel array114may include a plurality of indirect pixels (IPXs). Although it is assumed that each indirect pixel (IPX) shown inFIG.5is a circulation pixel for convenience of description, other implementations are also possible. The indirect pixel (IPX) may include a plurality of transfer transistors TX1˜TX4, a plurality of circulation transistors CX1˜CX4, and a plurality of pixel signal generation circuits PGC1˜PGC4. The photoelectric conversion element PD may perform photoelectric conversion of incident light reflected from the target object1, and may thus generate and accumulate photocharges. For example, the photoelectric conversion element PD may be implemented as a photodiode, a pinned photodiode, a photogate, a phototransistor or a combination thereof. One terminal of the photoelectric conversion element PD may be coupled to a substrate voltage (Vsub), and the other terminal of the photoelectric conversion element PD may be coupled to the plurality of transfer transistors TX1˜TX4 and the plurality of circulation transistors CX1˜CX4. In this case, the substrate voltage (Vsub) may be a voltage (for example, a ground voltage) that is applied to the substrate in which the photoelectric conversion element PD is formed. The transfer transistor TX1 may transfer photocharges stored in the photoelectric conversion element PD to the floating diffusion (FD) region FD1 in response to a transfer control signal TFv1. The transfer transistor TX2 may transfer photocharges stored in the photoelectric conversion element PD to the floating diffusion (FD) region FD2 in response to a transfer control signal TFv2. The transfer transistor TX3 may transfer photocharges stored in the photoelectric conversion element PD to the floating diffusion (FD) region FD3 in response to a transfer control signal TFv3. The transfer transistor TX4 may transfer photocharges stored in the photoelectric conversion element PD to the floating diffusion (FD) region FD4 in response to a transfer control signal TFv4. Each of the transfer control signals TFv1˜TFv4 may be received from the indirect pixel driver130. The circulation transistors CX1˜CX4 may be turned on or off in response to the circulation control signals CXV1˜CXV4. In more detail, the circulation transistor CX1 may be turned on or off in response to the circulation control signal CXV1, the circulation transistor CX2 may be turned on or off in response to the circulation control signal CXV2, the circulation transistor CX3 may be turned on or off in response to the circulation control signal CXV3, and the circulation transistor CX4 may be turned on or off in response to the circulation control signal CXV4. 
One terminal of each of the circulation transistors CX1˜CX4 may be coupled to the photoelectric conversion element PD, and the other terminal of each of the circulation transistors CX1˜CX4 may be coupled to a drain voltage (Vd). During a modulation period in which photocharges generated by the photoelectric conversion element PD are collected and transmitted to the floating diffusion (FD) regions FD1˜FD4, the drain voltage (Vd) may be at a low-voltage (e.g., a ground voltage) level. During a readout period after lapse of the modulation period, the drain voltage (Vd) may be at a high-voltage (e.g., a power-supply voltage) level. In addition, the circulation control signals CXV1˜CXV4 may respectively correspond to the circulation control voltages Vcir1˜Vcir4 (seeFIG.6) during the modulation period, such that each of the circulation transistors CX1˜CX4 may enable photocharges generated by the photoelectric conversion element PD to move in a predetermined direction (for example, in a counterclockwise direction). In addition, each of the circulation control signals CXV1˜CXV4 may correspond to a draining control voltage (Vdrain) (seeFIG.6) during the readout period, such that each of the circulation transistors CX1˜CX4 may fix a voltage level of the photoelectric conversion element PD to the drain voltage (Vd). Each of the circulation control signals CXV1˜CXV4 may be received from the indirect pixel driver130. The pixel signal generation circuits PGC1˜PGC4 may store photocharges transferred from the transfer transistors TX1˜TX4, and may output indirect pixel output signals IPXout1˜IPXout4 indicating electrical signals corresponding to the stored photocharges to the indirect readout circuit150. In more detail, the pixel signal generation circuit PGC1 may store photocharges transferred from the transfer transistor TX1, and may output an indirect pixel output signal IPXout1 indicating an electrical signal corresponding to the stored photocharges to the indirect readout circuit150. The pixel signal generation circuit PGC2 may store photocharges transferred from the transfer transistor TX2, and may output an indirect pixel output signal IPXout2 indicating an electrical signal corresponding to the stored photocharges to the indirect readout circuit150. The pixel signal generation circuit PGC3 may store photocharges transferred from the transfer transistor TX3, and may output an indirect pixel output signal IPXout3 indicating an electrical signal corresponding to the stored photocharges to the indirect readout circuit150. The pixel signal generation circuit PGC4 may store photocharges transferred from the transfer transistor TX4, and may output an indirect pixel output signal IPXout4 indicating an electrical signal corresponding to the stored photocharges to the indirect readout circuit150. In some implementations, the pixel signal generation circuits PGC1˜PGC4 may be simultaneously or sequentially operated. The indirect pixel output signals IPXout1˜IPXout4 may correspond to different phases, and the image signal processor200may calculate the distance to the target object1by calculating a phase difference in response to digital data generated from the indirect pixel output signals IPXout1˜IPXout4. The structures and operations of the pixel signal generation circuits PGC1˜PGC4 may be discussed later using the pixel signal generation circuit PGC1 as an example and such descriptions will be also considered for the remaining pixel signal generation circuits PGC2˜PGC4. 
Thus, redundant descriptions for the pixel signal generation circuits PGC2˜PGC4 will be omitted for brevity. The pixel signal generation circuit PGC1 may include a reset transistor RX1, a capacitor C1, a drive transistor DX1, and a selection transistor SX1. The reset transistor RX1 may be coupled between a reset voltage (Vr) and the floating diffusion (FD) region FD1, and may be turned on or off in response to a reset control signal RG1. For example, the reset voltage (Vr) may be a power-supply voltage. Whereas the turned-off reset transistor RX1 can sever electrical connection between the reset voltage (Vr) and the floating diffusion (FD) region FD1, the turned-on reset transistor RX1 can electrically connect the reset voltage (Vr) to the floating diffusion (FD) region FD1 such that the floating diffusion (FD) region FD1 can be reset to the reset voltage (Vr). The capacitor C1 may be coupled between the ground voltage and the floating diffusion (FD) region FD1, such that the capacitor C1 may provide electrostatic capacity so that the floating diffusion (FD) region FD1 can accumulate photocharges received through the transfer transistor TX1. For example, the capacitor C1 may be implemented as a junction capacitor. The drive transistor DX1 may be coupled between the power-supply voltage (VDD) and the selection transistor SX1, and may generate an electrical signal corresponding to a voltage level of the floating diffusion (FD) region FD1 coupled to a gate terminal thereof. The selection transistor SX1 may be coupled between the drive transistor DX1 and an output signal line, and may be turned on or off in response to the selection control signal SEL1. When the selection transistor SX1 is turned off, the selection transistor SX1 may not output the electrical signal of the drive transistor DX1 to the output signal line, and when the selection transistor SX1 is turned on, the selection transistor SX1 may output the electrical signal of the drive transistor DX1 to the output signal line. In this case, the output signal line may be a line through which the indirect pixel output signal (IPXout1) of the indirect pixel (IPX) is applied to the indirect readout circuit150, and other pixels belonging to the same column as the indirect pixel (IPX) may also output the indirect pixel output signals through the same output signal line. Each of the reset control signal RG1 and the selection control signal SEL1 may be provided from the indirect pixel driver130. FIG.6is a plan view600illustrating an example of the indirect pixel (IPX) shown inFIG.5based on some implementations of the disclosed technology. Referring toFIG.6, a plan view600of some parts of the indirect pixel (IPX) is illustrated. The plan view600of the indirect pixel (IPX) may include a photoelectric conversion element PD, a plurality of floating diffusion (FD) regions FD1˜FD4, a plurality of drain nodes D1˜D4, a plurality of transfer gates TG1˜TG4, and a plurality of circulation gates CG1˜CG4. The transfer gates TG1˜TG4 may respectively correspond to gates of the transfer transistors TX1˜TX4 shown inFIG.5. Thus, the transfer gate TG1 may correspond to a gate of the transfer transistor TX1, the transfer gate TG2 may correspond to a gate of the transfer transistor TX2, the transfer gate TG3 may correspond to a gate of the transfer transistor TX3, and the transfer gate TG4 may correspond to a gate of the transfer transistor TX4. 
In addition, the circulation gates CG1˜CG4 may respectively correspond to gates of the circulation transistors CX1˜CX4 shown inFIG.5. Thus, the circulation gate CG1 may correspond to a gate of the circulation transistor CX1, the circulation gate CG2 may correspond to a gate of the circulation transistor CX2, the circulation gate CG3 may correspond to a gate of the circulation transistor CX3, and the circulation gate CG4 may correspond to a gate of the circulation transistor CX4. In addition, the drain nodes D1˜D4 may respectively correspond to terminals of the circulation transistors CX1˜CX4 each receiving the drain voltage (Vd) as an input. In more detail, the drain node D1 may correspond to a terminal of the circulation transistor CX1 receiving the drain voltage (Vd), the drain node D2 may correspond to a terminal of the circulation transistor CX2 receiving the drain voltage (Vd), the drain node D3 may correspond to a terminal of the circulation transistor CX3 receiving the drain voltage (Vd), and the drain node D4 may correspond to a terminal of the circulation transistor CX4 receiving the drain voltage (Vd). The photoelectric conversion element PD may be formed in a semiconductor substrate, and may be surrounded by the plurality of gates TG1˜TG4 and CG1˜CG4. Each of the floating diffusion (FD) regions FD1˜FD4 may be located at one side of each of the transfer gates TG1˜TG4 corresponding thereto. In more detail, the floating diffusion (FD) region FD1 may be located at one side of the transfer gate TG1, the floating diffusion (FD) region FD2 may be located at one side of the transfer gate TG2, the floating diffusion (FD) region FD3 may be located at one side of the transfer gate TG3, and the floating diffusion (FD) region FD4 may be located at one side of the transfer gate TG4. Signals corresponding to the amount of photocharges stored in the floating diffusion (FD) regions FD1˜FD4 may be respectively output as tap signals TAP1˜TAP4 corresponding to the floating diffusion (FD) regions FD1˜FD4. In more detail, a signal corresponding to the amount of photocharges stored in the floating diffusion (FD) region FD1 may be output as a tap signal TAP1, a signal corresponding to the amount of photocharges stored in the floating diffusion (FD) region FD2 may be output as a tap signal TAP2, a signal corresponding to the amount of photocharges stored in the floating diffusion (FD) region FD3 may be output as a tap signal TAP3, and a signal corresponding to the amount of photocharges stored in the floating diffusion (FD) region FD4 may be output as a tap signal TAP4. The tap signals TAP1˜TAP4 may be respectively applied to gates of the drive transistors DX1˜DX4 corresponding thereto through conductive lines. In addition, the tap signals TAP1˜TAP4 may be respectively applied to terminals of the reset transistors RX1˜RX4 corresponding thereto through conductive lines. Each of the floating diffusion (FD) regions FD1˜FD4 may include an impurity region that is formed by implanting N-type impurities into a semiconductor substrate to a predetermined depth. The drain nodes D1˜D4 may be respectively located at one side of the circulation gates CG1˜CG4 corresponding thereto, and may be coupled to the drain voltage (Vd) through conductive lines. Each of the drain nodes D1˜D4 may include an impurity region that is formed by implanting N-type impurities into a semiconductor substrate to a predetermined depth. 
The transfer gates TG1˜TG4 may be respectively arranged at different positions corresponding to vertex points of a rectangular ring shape surrounding the photoelectric conversion element PD. The circulation gates CG1˜CG4 may be respectively disposed in regions corresponding to four sides of the rectangular ring shape surrounding the photoelectric conversion element PD. During the modulation period, the circulation gates CG1˜CG4 may sequentially and consecutively receive circulation control voltages Vcir1˜Vcir4 in a predetermined direction (for example, a counterclockwise direction), such that the circulation gates CG1˜CG4 may partially generate an electric field in an edge region of the photoelectric conversion element PD and may enable the electric field to be changed along the corresponding direction at intervals of a predetermined time. Photocharges stored in the photoelectric conversion element PD may move from one place to another place in the direction in which the electric field is generated and changed. In this case, each of the circulation control voltages Vcir1˜Vcir4 may have a potential level that is unable to electrically connect the photoelectric conversion element PD to each of the drain nodes D1˜D4. Thus, during the modulation period, the circulation gates CG1˜CG4 may not turn on the circulation transistors CX1˜CX4 corresponding thereto, and may perform only the role of moving photocharges of the photoelectric conversion element PD. During the readout period, each of the circulation gates CG1˜CG4 may fix a voltage level of the photoelectric conversion element PD to the drain voltage (Vd) by the draining control voltage (Vdrain), such that the circulation gates CG1˜CG4 can prevent noise from flowing into the photoelectric conversion element PD, resulting in no signal distortion. For example, when the draining control voltage (Vdrain) is activated to a logic high level, each of the circulation gates (CG1˜CG4) may have a high potential that can electrically connect the photoelectric conversion element PD to each of the drain nodes D1˜D4. Thus, the activated draining control voltage (Vdrain) may have a higher voltage than each of the activated circulation control voltages Vcir1˜Vcir4. Accordingly, during the readout period, the draining control voltage (Vdrain) may be activated to a logic high level. In this case, since each of the drain nodes D1˜D4 is electrically coupled to the photoelectric conversion element PD, the photoelectric conversion element PD may be fixed to a high drain voltage (Vd), such that residual photocharges in the photoelectric conversion element PD can be drained. The circulation gate CG1 may receive the circulation control signal CXV1 that corresponds to either the circulation control voltage (Vcir1) or the draining control voltage (Vdrain) based on the switching operation of the switching element S1 corresponding to the circulation gate CG1. The circulation gate CG2 may receive the circulation control signal CXV2 that corresponds to either the circulation control voltage (Vcir2) or the draining control voltage (Vdrain) based on the switching operation of the switching element S2 corresponding to the circulation gate CG2. The circulation gate CG3 may receive the circulation control signal CXV3 that corresponds to either the circulation control voltage (Vcir3) or the draining control voltage (Vdrain) based on the switching operation of the switching element S3 corresponding to the circulation gate CG3. 
The circulation gate CG4 may receive the circulation control signal CXV4 that corresponds to either the circulation control voltage (Vcir4) or the draining control voltage (Vdrain) based on the switching operation of the switching element S4 corresponding to the circulation gate CG4. In more detail, during the modulation period, the circulation gates CG1˜CG4 may respectively receive the circulation control voltages Vcir1˜Vcir4. During the readout period, each of the circulation gates CG1˜CG4 may receive the draining control voltage (Vdrain). Although the switching elements S1˜S4 may be included in the pixel driver130, other implementations are also possible. The transfer gates TG1˜TG4 and the circulation gates CG1˜CG4 may be spaced apart from each other by a predetermined distance while being arranged alternately with each other over the semiconductor substrate. When viewed in a plane, the transfer gates TG1˜TG4 and the circulation gates CG1˜CG4 may be arranged in a ring shape that surrounds the photoelectric conversion element PD. The circulation gates CG1 and CG3 may be respectively arranged at both sides of the photoelectric conversion element PD in a first direction with respect to the photoelectric conversion element PD at an upper portion of the semiconductor substrate. The circulation gates CG2 and CG4 may be respectively arranged at both sides of the photoelectric conversion element PD in a second direction with respect to the photoelectric conversion element PD. For example, the circulation gates CG1˜CG4 may be respectively disposed in regions corresponding to four sides of the rectangular ring shape surrounding the photoelectric conversion element PD. In this case, the circulation gates CG1˜CG4 may be arranged to partially overlap with the photoelectric conversion element PD. On the other hand, each of the transfer gates TG1˜TG4 may be spaced apart from two contiguous or adjacent circulation gates by a predetermined distance, and may be disposed between the two contiguous or adjacent circulation gates. For example, the transfer gates TG1˜TG4 may be disposed in regions corresponding to vertex points of the rectangular ring shape, and may be arranged to partially overlap with the photoelectric conversion element PD. FIG.7illustrates movement of photocharges by the circulation gates CG1˜CG4 in the indirect pixel shown inFIG.6based on some implementations of the disclosed technology. Referring toFIG.7, when the circulation control voltages Vcir1˜Vcir4 are respectively applied to the circulation gates CG1˜CG4, the electric field may be formed in a peripheral region of the circulation gates CG1˜CG4, such that photocharges generated by the photoelectric conversion element PD may move from the edge region of the photoelectric conversion element PD to another region contiguous or adjacent to the circulation gates CG1˜CG4. In this case, when the potential of each of the circulation control voltages Vcir1˜Vcir4 is less than a predetermined potential that can create a channel capable of electrically coupling the photoelectric conversion element PD to each of the drain nodes D1˜D4, photocharges can be accumulated or collected in the peripheral region of the circulation gates CG1˜CG4 without moving to the drain nodes D1˜D4. However, as can be seen fromFIG.6, the circulation gates CG1˜CG4 are disposed to surround the upper portion of the photoelectric conversion element PD. 
The circulation control voltages Vcir1˜Vcir4 are not applied simultaneously, but are sequentially and consecutively applied to the circulation gates CG1˜CG4 in a predetermined direction (for example, a counterclockwise direction), and thus photocharges may move along the edge region of the photoelectric conversion element PD according to a desired sequence of operations of the circulation gates CG1˜CG4. As such, photocharges can move in a predetermined direction along the edge region of the photoelectric conversion element PD. In some implementations, at a first point in time, the circulation control signal (Vcir1) is applied to the circulation gate CG1 and thus the electric field is formed in the peripheral region of the circulation gate CG1. In this case, photocharges generated by the photoelectric conversion element PD can be accumulated near the circulation gate CG1 by the electric field. After a predetermined time period, at a second point in time, the circulation control signal (Vcir2) is applied to the circulation gate CG2 contiguous or adjacent to the circulation gate CG1, and the circulation control signal (Vcir1) ceases to be applied to the circulation gate CG1. Thus, photocharges accumulated near the circulation gate CG1 may move toward the circulation gate CG2. Thus, photocharges may move from the circulation gate CG1 to the circulation gate CG2. After a predetermined time period, at a third point in time, the circulation control signal (Vcir3) is applied to the circulation gate CG3 contiguous or adjacent to the circulation gate CG2, and the circulation control signal (Vcir2) ceases to be applied to the circulation gate CG2. Thus, photocharges accumulated near the circulation gate CG2 may move toward the circulation gate CG3. Thus, photocharges may move from the circulation gate CG2 to the circulation gate CG3. After a predetermined time period, at a fourth point in time, the circulation control signal (Vcir4) is applied to the circulation gate CG4 contiguous or adjacent to the circulation gate CG3, and the circulation control signal (Vcir3) ceases to be applied to the circulation gate CG3. Thus, photocharges accumulated near the circulation gate CG3 may move toward the circulation gate CG4. Thus, photocharges may move from the circulation gate CG3 to the circulation gate CG4. After a predetermined time period, at a fifth point in time, the circulation control signal (Vcir1) is applied to the circulation gate CG1 contiguous or adjacent to the circulation gate CG4, and the circulation control signal (Vcir4) ceases to be applied to the circulation gate CG4. Thus, photocharges accumulated near the circulation gate CG4 may move toward the circulation gate CG1. Thus, photocharges may move from the circulation gate CG4 to the circulation gate CG1. If the above-mentioned operations are consecutively and repeatedly carried out, photocharges can be circulated along the edge region of the photoelectric conversion element (PD). 
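A hedged sketch of this rotation is given below; the step time and all names are illustrative assumptions, and real gate timing would come from the indirect pixel driver130.

import itertools

# Counterclockwise application order of the circulation control
# voltages during the modulation period: CG1 -> CG2 -> CG3 -> CG4.
CIRCULATION_ORDER = ("Vcir1", "Vcir2", "Vcir3", "Vcir4")

def circulate(num_steps, step_time_s=25e-9):
    # Exactly one circulation gate is driven at each step, so the
    # accumulated photocharges follow the rotating electric field along
    # the edge region of the photoelectric conversion element PD.
    t = 0.0
    for v in itertools.islice(itertools.cycle(CIRCULATION_ORDER), num_steps):
        yield t, v
        t += step_time_s

for t, v in circulate(6):
    print(f"t={t * 1e9:4.0f} ns: apply {v}")
# After Vcir4 the sequence wraps back to Vcir1, so the photocharges
# keep circulating for as long as the modulation period lasts.

FIG.8is a conceptual diagram illustrating how photocharges are moved toward a floating diffusion (FD) region by the transfer gates in the indirect pixel shown inFIG.6based on some implementations of the disclosed technology.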
FIG.8is a conceptual diagram illustrating how the indirect pixel shown inFIG.6transfers photocharges to the floating diffusion (FD) regions by the transfer gates based on some implementations of the disclosed technology. Referring toFIG.8, in some implementations, when the transfer control signals TFv1˜TFv4 are respectively applied to the transfer gates TG1˜TG4, an electrical channel is created in the semiconductor substrate below the transfer gates TG1˜TG4 to couple the photoelectric conversion element (PD) to the floating diffusion (FD) regions FD1˜FD4. The photocharges generated by the photoelectric conversion element (PD) can be transferred to the floating diffusion (FD) regions FD1˜FD4 through the channel. The transfer control signals TFv1˜TFv4 are not applied simultaneously, but are sequentially and consecutively applied to the transfer gates TG1˜TG4 in a predetermined direction (for example, a counterclockwise direction). The transfer control signals TFv1˜TFv4 may be sequentially applied to the transfer gates TG1˜TG4 according to a desired sequence of operations of the circulation gates CG1˜CG4 shown inFIG.7. For example, in a situation in which photocharges accumulated near the circulation gate CG1, by activation of the circulation gate CG1, move toward the circulation gate CG2, the transfer control signal (TFv1) can be applied only to the transfer gate TG1 located between the circulation gates CG1 and CG2. In this case, the transfer control signal (TFv1) may have a higher voltage than each of the circulation control voltages Vcir1 and Vcir2. As described above, when the transfer gate TG1 and the circulation gates CG1 and CG2 are arranged in an L-shape structure, with the transfer gate TG1 located at the vertex position and the signal (TFv1) applied to the transfer gate TG1 at a higher voltage level than each of the signals Vcir1 and Vcir2 applied to the circulation gates CG1 and CG2, most of the photocharges collected by the circulation gates CG1 and CG2 and the transfer gate TG1 may be intensively collected in the region located close to the transfer gate TG1. That is, most of the collected photocharges may be concentrated in a narrow region. Therefore, even when the transfer gate TG1 having a relatively small size is used, photocharges can be rapidly transferred to the floating diffusion (FD) region FD1. In the same manner as described above, in a situation in which photocharges accumulated near the circulation gate CG2 move toward the circulation gate CG3, the transfer control signal (TFv2) can be applied only to the transfer gate TG2 located between the circulation gates CG2 and CG3. In addition, if photocharges accumulated near the circulation gate CG3 move toward the circulation gate CG4, the transfer control signal (TFv3) can be applied only to the transfer gate TG3 located between the circulation gates CG3 and CG4. Likewise, if photocharges accumulated near the circulation gate CG4 move toward the circulation gate CG1, the transfer control signal (TFv4) can be applied only to the transfer gate TG4 located between the circulation gates CG4 and CG1.
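As a sketch of the pairing rule just described, the snippet below maps the pair of simultaneously driven circulation gates to the single transfer gate between them; the voltage constants are illustrative assumptions, since the source only requires that the transfer control signal be higher than the circulation control voltages.

    # Sketch of the transfer-gate selection rule described above. TGn sits
    # between CGn and CG(n+1) (TG4 between CG4 and CG1), and only that
    # transfer gate is driven while charge moves between the pair.
    V_CIR = 1.0   # assumed circulation control voltage level
    V_TF = 1.5    # assumed (higher) transfer control voltage level

    def active_signals(n):
        """Signals asserted while photocharges move from CGn toward the
        next circulation gate; n is 1..4 and wraps around."""
        nxt = n % 4 + 1
        return {
            f"Vcir{n}": V_CIR,
            f"Vcir{nxt}": V_CIR,
            f"TFv{n}": V_TF,   # TFvn > Vcir, so charge collects at the vertex
        }

    if __name__ == "__main__":
        for n in range(1, 5):
            print(n, active_signals(n))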
FIG.9is a timing diagram illustrating an example of operations of the image sensing device100based on some implementations of the disclosed technology. Referring toFIG.9, the operation period of the image sensing device100may be broadly classified into a modulation period and a readout period. The modulation period may refer to a time period in which the light source10emits light to a target object1under control of the light source driver170and light reflected from the target object1is sensed using the direct TOF method or the indirect TOF method. The readout period may refer to a time period in which the pixel signal generation circuits PGC1˜PGC4 of the indirect pixel (IPX) may respectively read the tap signals TAP1˜TAP4 corresponding to the amount of photocharges accumulated in the floating diffusion (FD) regions FD1˜FD4 during the modulation period, may generate indirect pixel output signals IPXout1˜IPXout4 based on the read tap signals TAP1˜TAP4, and may thus generate digital data corresponding to the indirect pixel output signals IPXout1˜IPXout4. In this case, a direct pixel output signal (DPXout) of the direct pixel (DPX) and digital data corresponding to the direct pixel output signal (DPXout) may be immediately generated as soon as the direct pixel (DPX) senses light, such that the direct pixel output signal (DPXout) and the digital data corresponding thereto can be transferred to the image signal processor200in real time. Thus, the readout period may refer to a time period in which the indirect pixel output signals IPXout1˜IPXout4 of the indirect pixel (IPX) and digital data corresponding thereto are generated and transferred. If the readout enable signal (ROUTen) is deactivated to a logic low level at a time point (t1), the modulation period may start. When the modulation period starts, the image sensing device100may operate in the object monitoring mode by default, and may generate digital data indicating the distance to the target object using the direct TOF method. In more detail, a direct TOF enable signal (dToFen) may be activated to a logic high level at the time point (t1). The readout enable signal (ROUTen), the direct TOF enable signal (dToFen), and an indirect TOF enable signal (iToFen) to be described later may be generated by the image signal processor200, and may thus be transferred to the image sensing device100. The image sensing device100may repeatedly emit pulse light synchronized with the clock signal (MLS) to the target object1at intervals of a predetermined time (for example, t1˜t2 or t2˜t3). The pulse light may be denoted by “LIGHT” as shown inFIG.9. In addition,FIG.9illustrates an event signal (EVENT) acting as the direct pixel output signal (DPXout) that is generated when light emitted from the image sensing device100is sensed after being reflected from the target object1. In other words, the event signal (EVENT) may refer to the direct pixel output signal (DPXout) that is generated by sensing light reflected from the target object1. On the other hand,FIG.9illustrates a signal (DARK) acting as a direct pixel output signal (DPXout) that is generated when a dark noise component (e.g., ambient noise) irrelevant to light emitted from the image sensing device100is sensed. That is, the signal (DARK) may refer to the direct pixel output signal (DPXout) that is generated by sensing the dark noise component instead of light reflected from the target object1. Light emitted from the image sensing device100at the time points t1 and t2 may be reflected by the target object1, and the reflected light may be sensed, such that the signal (EVENT) may be generated. However, a distance corresponding to a time delay between the signal (LIGHT) and the signal (EVENT) may exceed a threshold distance, and the counted resultant value stored in the mode counter of the image signal processor200may not increase. On the other hand, the signal (DARK) may occur due to the dark noise component in a time period t2˜t3.
The distance corresponding to a time delay between the signal (LIGHT) and the signal (DARK) may be equal to or less than a threshold distance, and the counted resultant value stored in the mode counter may increase. However, since the counted resultant value does not exceed a mode switching value, switching of the operation mode of the image sensing device100may not occur. Light emitted from the image sensing device100at each of the time points t4, t5, and t6 may be sensed after being reflected from the target object1, such that the signal (EVENT) may occur. The distance corresponding to the time delay between the signal (LIGHT) and the signal (EVENT) may be equal to or less than a threshold distance, and the counted resultant value stored in the mode counter may increase. Meanwhile, in a time period t4˜t7, the signal (DARK) may occur twice due to the dark noise component. The distance corresponding to the time delay between the signal (LIGHT) and the signal (DARK) may exceed the threshold distance, and the counted resultant value stored in the mode counter may not increase. However, assuming that the counted resultant value does not exceed the mode switching value at the time point (t7), switching of the operation mode of the image sensing device100may not occur. That is, if each of the threshold distance, the mode switching value, and the initialization time is set to an appropriate value, an erroneous increase of the counted resultant value or erroneous switching of the operation mode caused by the signal (DARK) may be prevented. Although each of the threshold distance, the mode switching value, and the initialization time can be experimentally determined in advance, the scope or spirit of the disclosed technology is not limited thereto, and other implementations are also possible. In some implementations, the image signal processor200may also dynamically change at least one of the threshold distance, the mode switching value, and the initialization time according to external conditions (e.g., illuminance outside the photographing device, speed of the photographing device, a user request, etc.). Light emitted from the image sensing device100at a time point (t8) may be sensed after being reflected from the target object1, such that the signal (EVENT) may occur. The distance corresponding to the time delay between the signal (LIGHT) and the signal (EVENT) may be equal to or less than a threshold distance, and the counted resultant value stored in the mode counter may increase. Assuming that the counted resultant value exceeds the mode switching value at the time point (t8), the image signal processor200may allow the operation mode of the image sensing device100to switch from the object monitoring mode to the depth resolving mode. Therefore, at a time point (t9), the direct TOF enable signal (dToFen) may be deactivated to a logic low level, and the indirect TOF enable signal (iToFen) may be activated to a logic high level. Accordingly, the image sensing device100may generate digital data indicating the distance to the target object1using the indirect TOF method.
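The object-monitoring decision just described can be summarized in the hedged Python sketch below; the threshold distance, the mode switching value, and the event delays are placeholder values (the source leaves them to experiment), and the periodic counter initialization is omitted for brevity.

    # Sketch of the mode-counter logic: count EVENT signals whose
    # delay-derived distance is within the threshold; once the count
    # exceeds the mode switching value, switch from the object monitoring
    # mode (direct TOF) to the depth resolving mode (indirect TOF).
    C = 299_792_458.0            # speed of light in m/s

    THRESHOLD_DISTANCE_M = 5.0   # assumed threshold distance
    MODE_SWITCHING_VALUE = 3     # assumed count required to switch modes

    def distance_from_delay(delay_s):
        """Convert the LIGHT-to-EVENT round-trip delay into a distance."""
        return C * delay_s / 2.0

    def should_switch_to_depth_resolving(delays_s):
        count = 0
        for delay in delays_s:
            if distance_from_delay(delay) <= THRESHOLD_DISTANCE_M:
                count += 1
            if count > MODE_SWITCHING_VALUE:
                return True
        return False

    if __name__ == "__main__":
        # Delays near 20 ns correspond to roughly 3 m, inside the threshold.
        print(should_switch_to_depth_resolving([20e-9, 21e-9, 19e-9, 20e-9]))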
During the depth resolving mode after the time point (t9), the image sensing device100may repeatedly emit a modulated light synchronized with the clock signal (MLS) to the target object1at intervals of a predetermined time (for example, t10˜t15). In the modulation period, the drain voltage (Vd) applied to each of the drain nodes D1˜D4 may be at a low-voltage (e.g., a ground voltage) level. In the readout period, the drain voltage (Vd) applied to each of the drain nodes D1˜D4 may be at a high-voltage (e.g., a power-supply voltage) level. If the drain voltage (Vd) were at a high-voltage level even in the modulation period, it could prevent photocharges collected by the circulation gates from moving toward the transfer gates. Therefore, the drain voltage (Vd) may be maintained at a low-voltage level in the modulation period. At a time point (t9) where the depth resolving mode is started, the circulation control voltage (Vcir1) may be activated. That is, the circulation control voltage (Vcir1) may be applied to the circulation gate CG1 at the time point (t9). In this case, the circulation control voltage (Vcir1) may have a potential level that does not electrically connect the photoelectric conversion element PD to the drain node D1. The circulation control voltage (Vcir1) may be activated during a time period t9˜t11. Since the activated circulation control voltage (Vcir1) is applied to the circulation gate CG1, the electric field may be formed in a region that is contiguous or adjacent to the circulation gate CG1 in the edge region of the photoelectric conversion element PD. As a result, photocharges generated by photoelectric conversion of reflected light in the photoelectric conversion element (PD) may move toward the circulation gate CG1 by the electric field, such that the resultant photocharges are accumulated near or collected in the circulation gate CG1. At a time point (t10), the transfer control signal (TFv1) and the circulation control voltage (Vcir2) may be activated. For example, in the situation in which the circulation control signal (Vcir1) is still activated, if the circulation control signal (Vcir2) is applied to the circulation gate CG2 and at the same time the transfer control signal (TFv1) is applied to the transfer gate TG1, the circulation gates CG1 and CG2 and the transfer gate TG1 can operate at the same time. In this case, the transfer control signal (TFv1) may have a higher voltage than each of the circulation control voltages Vcir1 and Vcir2. The transfer control signal (TFv1) may be activated during a time period t10˜t11, and the circulation control voltage (Vcir2) may be activated during a time period t10˜t12. Therefore, photocharges collected near the circulation gate CG1 during the time period t10˜t11 may move toward the transfer gate TG1. In addition, photocharges additionally collected by the transfer gate TG1 and the circulation gates CG1 and CG2 during the time period t11˜t12 may also move toward the transfer gate TG1. Since the circulation gates CG1 and CG2 and the transfer gate TG1 are arranged in an L-shape structure, with the transfer gate TG1 arranged at the vertex position and a relatively higher potential applied to the transfer gate TG1, photocharges can be intensively collected in the region (i.e., the vertex region) located close to the transfer gate TG1. The collected photocharges can be transferred to the floating diffusion (FD) region FD1 by the transfer gate TG1. Thus, photocharges are intensively collected in a narrow vertex region, such that photocharges can be rapidly transferred to the floating diffusion (FD) region FD1 using a small-sized transfer gate TG1. At the time point (t11), the circulation control signal (Vcir1) and the transfer control signal (TFv1) may be deactivated, and the transfer control signal (TFv2) and the circulation control signal (Vcir3) may be activated.
Thus, the transfer gate TG1 and the circulation gate CG1 that are located at one side of the circulation gate CG2 may stop operation, and the transfer gate TG2 and the circulation gate CG3 that are located at the other side of the circulation gate CG2 may start operation. In this case, the activated transfer control signal (TFv2) may have a higher voltage than the circulation control voltage (Vcir3). However, although the transfer control signal (TFv2) and the circulation control signal (Vcir3) are activated, a predetermined time (i.e., a rising time) may be consumed until the potential levels of the transfer control signal (TFv2) and the circulation control voltage (Vcir3) reach a predetermined level at which the gates TG2 and CG3 can actually operate. Thus, there may be a time period in which the transfer gate TG1 has stopped operating and the transfer gate TG2 is not yet operating. Therefore, the circulation control signal (Vcir2) is continuously activated until reaching the time point (t12). As a result, during a predetermined time in which the transfer gate TG2 is not yet operating, photocharges may move toward the circulation gate CG2 without being dispersed. For example, not only photocharges not transferred by the transfer gate TG1, but also newly generated photocharges may move toward the circulation gate CG2. If the rising time of each of the transfer control signal (TFv2) and the circulation control voltage (Vcir3) has expired, the transfer gate TG2 may operate by the transfer control signal (TFv2) and the circulation gate CG3 may operate by the circulation control signal (Vcir3). Thus, the circulation gates CG2 and CG3 and the transfer gate TG2 may operate at the same time. In this case, since the transfer control signal (TFv2) has a higher voltage than each of the circulation control voltages Vcir2 and Vcir3, photocharges may move toward the transfer gate TG2 and may flow into the floating diffusion (FD) region FD2 by the transfer gate TG2. At the time point (t12), the circulation control signal (Vcir2) and the transfer control signal (TFv2) may be deactivated, and the transfer control signal (TFv3) and the circulation control signal (Vcir4) may be activated. Thus, the transfer gate TG2 and the circulation gate CG2 that are located at one side of the circulation gate CG3 may stop operation, and the transfer gate TG3 and the circulation gate CG4 that are located at the other side of the circulation gate CG3 may start operation. In this case, the transfer control signal (TFv3) may have a higher voltage than the circulation control voltage (Vcir4). In this case, the circulation control voltage (Vcir3) is continuously activated until reaching the time point (t13). As a result, during a predetermined time in which the transfer gate TG3 is not yet operating, photocharges may move toward the circulation gate CG3 without being dispersed. If the rising time of each of the transfer control signal (TFv3) and the circulation control voltage (Vcir4) has expired, the transfer gate TG3 may operate by the transfer control signal (TFv3) and the circulation gate CG4 may operate by the circulation control voltage (Vcir4). Thus, the circulation gates CG3 and CG4 and the transfer gate TG3 may operate at the same time. In this case, since the transfer control signal (TFv3) has a higher voltage than each of the circulation control voltages Vcir3 and Vcir4, photocharges may move toward the transfer gate TG3 and may flow into the floating diffusion (FD) region FD3 by the transfer gate TG3.
At a time point (t13), the circulation control signal (Vcir3) and the transfer control signal (TFv3) may be deactivated, and the transfer control signal (TFv4) may be activated. In this case, the activated transfer control signal (TFv4) may have a higher voltage than the circulation control voltage (Vcir4), and the circulation control signal (Vcir4) may remain activated until reaching the time point (t14). Therefore, photocharges may move toward the circulation gate CG4. Thereafter, if the rising time of the transfer control signal (TFv4) has expired, photocharges may flow into the floating diffusion (FD) region FD4 by the transfer gate TG4. The time period t9˜t14 may be defined as a first indirect cycle. Until the modulation period is ended, the operation of moving photocharges and the operation of sequentially transferring the moved photocharges to the floating diffusion (FD) regions FD1˜FD4 may be repeatedly performed in the same manner as in the time period t9˜t14. As can be seen fromFIG.9, the operation corresponding to the first indirect cycle may be repeatedly performed in each of the second to m-th indirect cycles (where ‘m’ is an integer of 3 or more). As a result, even when photoelectric conversion sensitivity of the photoelectric conversion element PD is at a low level or transmission (Tx) efficiency of the transfer gates TG1˜TG4 is at a low level, the accuracy of sensing the distance to the target object using the indirect TOF method can be increased or improved. The number of times the first indirect cycle is repeated may be experimentally determined in advance in consideration of photoelectric conversion sensitivity of the photoelectric conversion element PD or transmission (Tx) efficiency of the transfer gates TG1˜TG4. In some other implementations, the first indirect cycle may not be repeated, and the readout period may be started as soon as the first indirect cycle is ended. If the modulation period has expired, the readout enable signal (ROUTen) is activated such that the readout period may be started. In this case, the drain voltage (Vd) may be activated to a high-voltage level, and the draining control signal (Vdrain) may also be activated to a high-voltage level. Therefore, the photoelectric conversion element PD may be electrically coupled to the drain nodes D1˜D4 by the circulation gates CG1˜CG4, such that the voltage level of the photoelectric conversion element PD may be fixed to the drain voltage (Vd) during the readout period. FIG.10is a schematic diagram illustrating an example of some constituent elements included in the image sensing device shown inFIG.1based on some implementations of the disclosed technology. Referring toFIG.10, the image sensing device1000may illustrate one example of some constituent elements included in the image sensing device100shown inFIG.1. The image sensing device1000may include a pixel array1005, a row driver1050, a modulation driver1060, a horizontal time-to-digital circuit (TDC)1070, a vertical TDC1080, and an indirect readout circuit1090. The pixel array1005may correspond to the pixel array110shown inFIG.1, and may include a plurality of direct pixels1010and a plurality of indirect pixels1040. The pixel array1005shown inFIG.10based on some implementations of the disclosed technology may include a plurality of pixels arranged in a matrix shape with desired numbers of rows and columns, e.g., 22 rows and 22 columns.
In implementations, the number of rows and the number of columns included in the pixel array1005may be set as needed. Since the number of rows and the number of columns are determined based on the indirect pixel1040, each direct pixel1010, which differs in size from the indirect pixel1040, may be arranged across two rows and two columns. The plurality of direct pixels1010may be included in a first direct pixel group1020and/or a second direct pixel group1030. Although each direct pixel1010may be four times larger than each indirect pixel1040, the scope or spirit of the disclosed technology is not limited thereto, and other implementations are also possible. This is because the quenching circuit (QC) or the recharging circuit (RC) included in the direct pixel1010may be relatively large in size. In some other implementations, the ratio in size between the direct pixel1010and the indirect pixel1040may be set to a desired ratio for a specific design, for example, “1”, “½”, “1/16”, or other ratios. The first direct pixel group1020may include a plurality of direct pixels1010arranged in a line in a first diagonal direction of the pixel array1005. For example, the first diagonal direction may refer to a straight direction by which a first crossing point where the first row and the first column of the pixel array1005cross each other is connected to a second crossing point where the last row and the last column of the pixel array1005cross each other. The second direct pixel group1030may include a plurality of direct pixels1010arranged in a line in a second diagonal direction of the pixel array1005. For example, the second diagonal direction may refer to a straight direction by which a first crossing point where the first row and the last column of the pixel array1005cross each other is connected to a second crossing point where the last row and the first column of the pixel array1005cross each other. A central pixel disposed at a crossing point of the first direct pixel group1020and the second direct pixel group1030may be included in each of the first direct pixel group1020and the second direct pixel group1030. The direct pixels1010may be arranged in a line sensor shape within the pixel array1005, such that the entire region including the direct pixels1010arranged in the line sensor shape may be smaller in size than the region including the indirect pixels1040. This is because the direct pixels1010are designed to have a relatively longer effective measurement distance and a relatively higher temporal resolution rather than to acquire an accurate depth image. As a result, the direct pixels1010can recognize the presence or absence of the target object1in the object monitoring mode and, at the same time, can correctly measure the distance to the target object1, using the relatively longer effective measurement distance and the relatively higher temporal resolution. Meanwhile, when viewed from depth images respectively generated by the indirect pixels1040, each of the direct pixels1010may act as a dead pixel. In this case, the image signal processor200may perform interpolation of the depth images respectively corresponding to positions of the direct pixels1010, by means of digital data of the indirect pixels1040that are located adjacent to the direct pixels1010within the range of a predetermined distance (e.g., two pixels) or less.
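The interpolation step can be pictured with the short sketch below, which fills a direct-pixel (dead) site with the mean of valid indirect-pixel depths within two pixels; the plain averaging kernel and the None-marker convention are assumptions for illustration, as the source does not specify the interpolation formula.

    # Sketch of dead-pixel interpolation: average the valid indirect-pixel
    # depth samples within max_dist of a direct-pixel site.
    from statistics import mean

    def interpolate_dead_pixel(depth, row, col, max_dist=2):
        """Direct-pixel sites in the 2-D list `depth` hold None; return
        the mean of the non-None neighbors within max_dist pixels."""
        samples = []
        for r in range(max(0, row - max_dist), min(len(depth), row + max_dist + 1)):
            for c in range(max(0, col - max_dist), min(len(depth[0]), col + max_dist + 1)):
                if depth[r][c] is not None:
                    samples.append(depth[r][c])
        return mean(samples) if samples else None

    if __name__ == "__main__":
        grid = [
            [1.0, 1.1, 1.2],
            [1.0, None, 1.2],   # None marks a direct-pixel (dead) site
            [1.1, 1.1, 1.3],
        ]
        print(interpolate_dead_pixel(grid, 1, 1))   # 1.125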
In the rectangular pixel array1005, the plurality of indirect pixels1040may be arranged in a matrix shape within the remaining regions other than the region provided with the plurality of direct pixels1010. The row driver1050and the modulation driver1060may correspond to the indirect pixel driver130shown inFIG.1. The row driver1050may be arranged in a vertical direction (or a column direction) of the pixel array1005, and the modulation driver1060may be arranged in a horizontal direction (or a row direction) of the pixel array1005. The row driver1050may provide the reset control signals RG1˜RG4 and the selection control signals SEL1˜SEL4 to each of the indirect pixels1040. The reset control signals RG1˜RG4 and the selection control signals SEL1˜SEL4 may be supplied through a signal line extending in a horizontal direction, such that the plurality of indirect pixels1040belonging to the same row of the pixel array1005may receive the same reset control signals RG1˜RG4 and the same selection control signals SEL1˜SEL4. The modulation driver1060may provide the circulation control signals CXV1˜CXV4 and the transfer control signals TFv1˜TFv4 to each of the indirect pixels1040. The circulation control signals CXV1˜CXV4 and the transfer control signals TFv1˜TFv4 may be supplied through a signal line extending in a vertical direction, such that the plurality of indirect pixels1040belonging to the same column of the pixel array1005may receive the same circulation control signals CXV1˜CXV4 and the same transfer control signals TFv1˜TFv4. Although not shown inFIG.10, if at least one of the quenching circuit (QC) and the recharging circuit (RC) in each of the direct pixels1010is implemented as an active device, a direct pixel driver for supplying the quenching control signal (QCS) and/or the recharging control signal may be further disposed. A method for supplying signals by the direct pixel driver may correspond to that of the row driver1050. The horizontal TDC1070and the vertical TDC1080may correspond to the direct readout circuit140shown inFIG.1. The horizontal TDC1070may be arranged in a horizontal direction (or a row direction) at an upper side (or a lower side) of the pixel array1005. The vertical TDC1080may be arranged in a vertical direction (or a column direction) at a right side (or a left side) of the pixel array1005. The horizontal TDC1070may be coupled to each direct pixel1010included in the first direct pixel group1020. The horizontal TDC1070may include a plurality of TDCs (i.e., TDC circuits) that correspond to the direct pixels1010of the first direct pixel group1020on a one-to-one basis. The vertical TDC1080may be coupled to each direct pixel1010included in the second direct pixel group1030. The vertical TDC1080may include a plurality of TDCs (i.e., TDC circuits) that correspond to the direct pixels1010of the second direct pixel group1030on a one-to-one basis. Each TDC included in either the horizontal TDC1070or the vertical TDC1080may include a digital logic circuit configured to generate digital data by calculating a time delay between a pulse signal of the corresponding direct pixel DPX and a reference pulse, and an output buffer configured to store the generated digital data therein. The point of each direct pixel1010shown inFIG.10may refer to a terminal for electrical connection to either the horizontal TDC1070or the vertical TDC1080. The central pixel may include two points, such that the two points may be respectively coupled to the horizontal TDC1070and the vertical TDC1080.
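Functionally, each TDC circuit described above quantizes the delay between the direct pixel's pulse and the reference pulse into digital data; the sketch below illustrates this, with the 1 ns resolution and 12-bit output width chosen purely as assumptions.

    # Sketch of the TDC conversion: quantize (pulse - reference) into an
    # unsigned code, clamped to the width of the output buffer.
    TDC_RESOLUTION_S = 1e-9      # assumed time bin width
    TDC_BITS = 12                # assumed output buffer width

    def tdc_convert(pulse_time_s, reference_time_s):
        delay = max(0.0, pulse_time_s - reference_time_s)
        code = int(delay / TDC_RESOLUTION_S)
        return min(code, (1 << TDC_BITS) - 1)

    if __name__ == "__main__":
        print(tdc_convert(pulse_time_s=137e-9, reference_time_s=5e-9))   # 132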
In the image sensing device1000based on some implementations of the disclosed technology, each TDC circuit may be disposed not in the direct pixel1010but at one side of, and outside, the pixel array1005, such that the region of each direct pixel1010can be greatly reduced in size. Accordingly, the direct pixels1010and the indirect pixels1040may be simultaneously disposed in the pixel array1005, and many more direct pixels1010can be disposed in the pixel array1005, such that higher resolution may be obtained when the distance to the target object is sensed by the direct TOF method. The indirect readout circuit1090may correspond to the indirect readout circuit150shown inFIG.1, may process analog pixel signals generated from the indirect pixels1040, and may generate and store digital data corresponding to the processed pixel signals. The indirect pixels1040belonging to the same column of the pixel array1005may output pixel signals through the same signal line. Therefore, in order to normally transfer such pixel signals, the indirect pixels1040may sequentially output the pixel signals on a row basis. FIG.11is a conceptual diagram illustrating an example of operations of the image sensing device1000shown inFIG.10based on some implementations of the disclosed technology. Referring toFIGS.10and11, how pixels are activated according to lapse of time is illustrated for the cases in which the image sensing device1000operates in each of the object monitoring mode and the depth resolving mode. In this case, activation of such pixels may refer to an operation state in which each pixel receives a control signal from the corresponding pixel driver120or130, generates a signal (e.g., a pulse signal or a pixel signal) formed by detection of incident light, and transmits the generated signal to the corresponding readout circuit140or150. InFIG.11, the activated pixels may be represented by shaded pixels. In the object monitoring mode in which the image sensing device1000generates digital data indicating the distance to the target object using the direct TOF method, the image sensing device1000may operate sequentially in units of a direct cycle (or on a direct-cycle basis). As can be seen fromFIG.11, the image sensing device1000may sequentially operate in the order of first to twelfth direct cycles DC1˜DC12. Each of the first to twelfth direct cycles DC1˜DC12 may refer to a time period in which a series of operations including, for example, an operation of emitting pulse light to the target object1, an operation of generating a pulse signal corresponding to reflected light received from the target object1, an operation of generating digital data corresponding to the pulse signal, a quenching operation, and a recharging operation, can be performed. For example, the time period t1˜t2 or t2˜t3 shown inFIG.9may correspond to a single direct cycle. In the first direct cycle DC1, the direct pixels1010included in the first direct pixel group1020may be activated, and the direct pixels1010included in the second direct pixel group1030may be deactivated. The horizontal TDC1070for processing the pulse signal of the first direct pixel group1020may be activated, and the vertical TDC1080for processing the pulse signal of the second direct pixel group1030may be deactivated. In addition, the indirect pixels1040, and the constituent elements1050,1060, and1090for controlling and reading out the indirect pixels1040may be deactivated.
In the second direct cycle DC2, the direct pixels1010included in the first direct pixel group1020may be deactivated, and the direct pixels1010included in the second direct pixel group1030may be activated. In addition, the horizontal TDC1070for processing the pulse signal of the first direct pixel group1020may be deactivated, and the vertical TDC1080for processing the pulse signal of the second direct pixel group1030may be activated. In addition, the indirect pixels1040, and the constituent elements1050,1060, and1090for controlling and reading out the indirect pixels1040may be deactivated. Not only in the third to twelfth direct cycles DC3˜DC12, but also in subsequent direct cycles, the direct pixels1010included in the first direct pixel group1020and the direct pixels included in the second direct pixel group1030may be alternately activated in the same manner as in the first direct cycle DC1 and the second direct cycle DC2. Therefore, the horizontal TDC1070and the vertical TDC1080may also be activated alternately with each other. Therefore, a minimum number of the direct pixels having relatively large power consumption may be included in the pixel array1005, and only some of the direct pixels may be activated within one direct cycle, such that power consumption can be optimized. In addition, pixels to be activated in the pixel array1005may be changed from pixels of the first direct pixel group1020to pixels of the second direct pixel group1030or may be changed from pixels of the second direct pixel group1030to pixels of the first direct pixel group1020, such that effects similar to those of a light beam of a radar system configured to rotate by 360° can be obtained. Although the above-mentioned embodiment of the disclosed technology has disclosed that the first direct pixel group1020is first activated for convenience of description, the scope or spirit of the disclosed technology is not limited thereto, and the second direct pixel group1030according to another embodiment can be activated first as necessary. In addition, although the above-mentioned embodiment of the disclosed technology has disclosed that the entire direct cycle can extend to at least the twelfth direct cycle DC12 for convenience of description, the scope or spirit of the disclosed technology is not limited thereto. If the predetermined condition described in step S30shown inFIG.3is satisfied at any point prior to reaching the twelfth direct cycle DC12, the operation mode of the image sensing device1000may switch from the object monitoring mode to the depth resolving mode. If the operation mode of the image sensing device1000switches from the object monitoring mode to the depth resolving mode, the indirect cycle (IC) may be started. In the indirect cycle (IC), the indirect pixels1040and the constituent elements1050,1060, and1090for controlling and reading out the indirect pixels1040may be activated. In the indirect cycle (IC), the indirect pixels1040can be activated at the same time. In addition, the direct pixels1010and the constituent elements1070and1080for controlling and reading out the direct pixels1010may be deactivated.
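Before moving on, the direct-cycle alternation described above can be stated compactly; in the sketch below, odd direct cycles activate the first group with the horizontal TDC and even cycles the second group with the vertical TDC, assuming (as in the description) that the first group leads. The string labels are illustrative only.

    # Sketch of the per-cycle alternation between the two direct pixel
    # groups and their corresponding TDCs.
    def active_blocks(direct_cycle):
        """Return the pixel group and TDC active in a 1-indexed cycle."""
        if direct_cycle % 2 == 1:
            return ("first direct pixel group", "horizontal TDC")
        return ("second direct pixel group", "vertical TDC")

    if __name__ == "__main__":
        for dc in range(1, 5):
            print(f"DC{dc}:", *active_blocks(dc))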
FIG.12is a conceptual diagram illustrating another example of operations of the image sensing device100shown inFIG.1based on some implementations of the disclosed technology. The image sensing device1200shown inFIG.12may illustrate another example of some constituent elements included in the image sensing device100shown inFIG.1. The image sensing device1200may include a pixel array1205, a row driver1250, a modulation driver1260, a horizontal TDC1270, a vertical TDC1280, and an indirect readout circuit1290. Except for some structures different from those of the image sensing device1000, the remaining components of the image sensing device1200may be substantially similar in structure and function to those of the image sensing device1000shown inFIG.10, and as such redundant description thereof will herein be omitted for brevity. For convenience of description, the image sensing device1200shown inFIG.12will hereinafter be described centering upon characteristics different from those of the image sensing device1000shown inFIG.10. The pixel array1205may further include a third direct pixel group1225and a fourth direct pixel group1235, each of which includes a plurality of direct pixels1210. The entire region and detailed operations of the direct pixels1210included in each of the third and fourth direct pixel groups1225and1235may be substantially identical to those of the direct pixels1210included in the first and second direct pixel groups1220and1230. The third direct pixel group1225may include a plurality of direct pixels1210arranged in a line in a horizontal direction (or a row direction) of the pixel array1205. The fourth direct pixel group1235may include a plurality of direct pixels1210arranged in a line in a vertical direction (or a column direction) of the pixel array1205. The first direct pixel group1220and the second direct pixel group1230may be defined as a first set. The third direct pixel group1225and the fourth direct pixel group1235may be defined as a second set. A central pixel disposed at a crossing point of the first to fourth direct pixel groups1220,1225,1230, and1235may be included in each of the first to fourth direct pixel groups1220,1225,1230, and1235. On the other hand, the horizontal TDC1270may be coupled to each direct pixel1210included in the first direct pixel group1220and each direct pixel1210included in the third direct pixel group1225. Each direct pixel1210of the first direct pixel group1220and each direct pixel1210of the third direct pixel group1225that are arranged in a line in the column direction may be coupled to the same signal line, and the horizontal TDC1270may include a plurality of TDC circuits each corresponding to a set of two direct pixels1210. The vertical TDC1280may be coupled to each direct pixel1210included in the second direct pixel group1230and each direct pixel1210included in the fourth direct pixel group1235. Each direct pixel1210of the second direct pixel group1230and each direct pixel1210of the fourth direct pixel group1235that are arranged in a line in the column direction may be coupled to the same signal line, and the vertical TDC1280may include a plurality of TDC circuits each corresponding to a set of two direct pixels1210. FIG.13is a conceptual diagram illustrating an example of operations of the image sensing device shown inFIG.12based on some implementations of the disclosed technology. Referring toFIGS.12and13, how pixels are activated according to lapse of time is illustrated for the cases in which the image sensing device1200operates in each of the object monitoring mode and the depth resolving mode.
In this case, activation of such pixels may refer to an operation state in which each pixel receives a control signal from the corresponding pixel driver120or130, generates a signal (e.g., a pulse signal or a pixel signal) acquired by detection of incident light, and transmits the generated signal to the corresponding readout circuit140or150. InFIG.13, the activated pixels may be represented by shaded pixels. In the object monitoring mode in which the image sensing device1200generates digital data indicating the distance to the target object using the direct TOF method, the image sensing device1200may operate sequentially in units of a direct cycle (or on a direct-cycle basis). As can be seen fromFIG.13, the image sensing device1200may sequentially operate in the order of first to twelfth direct cycles DC1˜DC12. Each of the first to twelfth direct cycles DC1˜DC12 may refer to a time period in which a series of operations including, for example, an operation of emitting pulse light to the target object1, an operation of generating a pulse signal corresponding to reflected light received from the target object1, an operation of generating digital data corresponding to the pulse signal, the quenching operation, and the recharging operation, can be performed. For example, the time period t1˜t2 or t2˜t3 shown inFIG.9may correspond to a single direct cycle. In the first direct cycle DC1, the direct pixels1210included in each of the first direct pixel group1220and the second direct pixel group1230that correspond to the first set may be activated, and the direct pixels1210included in each of the third direct pixel group1225and the fourth direct pixel group1235that correspond to the second set may be deactivated. The horizontal TDC1270for processing the pulse signal of the first direct pixel group1220and the vertical TDC1280for processing the pulse signal of the second direct pixel group1230may be activated. In addition, the indirect pixels1240, and the constituent elements1250,1260, and1290for controlling and reading out the indirect pixels1240may be deactivated. In the second direct cycle DC2, the direct pixels1210included in each of the first direct pixel group1220and the second direct pixel group1230that correspond to the first set may be deactivated, and the direct pixels1210included in each of the third direct pixel group1225and the fourth direct pixel group1235that correspond to the second set may be activated. The horizontal TDC1270for processing the pulse signal of the third direct pixel group1225and the vertical TDC1280for processing the pulse signal of the fourth direct pixel group1235may be activated at the same time. In addition, the indirect pixels1240, and the constituent elements1250,1260, and1290for controlling and reading out the indirect pixels1240may be deactivated. Not only in the third to twelfth direct cycles DC3˜DC12, but also in subsequent direct cycles, the direct pixels1210included in the first and second direct pixel groups1220and1230and the direct pixels included in the third and fourth direct pixel groups1225and1235may be alternately activated in the same manner as in the first direct cycle DC1 and the second direct cycle DC2. Therefore, a minimum number of the direct pixels having relatively larger power consumption may be included in the pixel array1205, and only some of the direct pixels may be activated within one direct cycle, such that the amount of power consumption can be optimized. 
In addition, pixels to be activated in the pixel array1205may be changed from the direct pixels1210(i.e., the first and second direct pixel groups1220and1230) arranged in the diagonal direction to the direct pixels1210(i.e., the third and fourth direct pixel groups1225and1235) arranged in the horizontal and vertical directions, or may be changed from the direct pixels1210arranged in the horizontal and vertical directions to the direct pixels1210arranged in the diagonal direction, such that effects similar to those of a light beam of a radar system can be obtained. Although the above-mentioned embodiment of the disclosed technology has disclosed that the first and second direct pixel groups1220and1230are first activated for convenience of description, the scope or spirit of the disclosed technology is not limited thereto, and the third and fourth direct pixel groups1225and1235according to another embodiment can be activated first as necessary. AlthoughFIG.13has disclosed that two direct pixel groups are simultaneously activated in each direct cycle, it should be noted that only one direct pixel group may be activated in each direct cycle based on some other implementations of the disclosed technology. For example, the first direct pixel group1220, the fourth direct pixel group1235, the second direct pixel group1230, and the third direct pixel group1225may be sequentially activated clockwise, such that effects similar to those of a light beam of a radar system can be obtained. In addition, although the above-mentioned embodiment of the disclosed technology has disclosed that the entire direct cycle can extend to at least the twelfth direct cycle DC12 for convenience of description, the scope or spirit of the disclosed technology is not limited thereto, and other implementations are also possible. For example, if the predetermined condition described in step S30shown inFIG.3is satisfied at any point prior to reaching the twelfth direct cycle DC12, the operation mode of the image sensing device1200may switch from the object monitoring mode to the depth resolving mode. If the operation mode of the image sensing device1200switches from the object monitoring mode to the depth resolving mode, the indirect cycle (IC) may be started. In the indirect cycle (IC), the indirect pixels1240and the constituent elements1250,1260, and1290for controlling and reading out the indirect pixels1240may be activated. In addition, the direct pixels1210and the constituent elements1270and1280for controlling and reading out the direct pixels1210may be deactivated. As is apparent from the above description, the image sensing device based on some implementations of the disclosed technology can be equipped with different sensing pixels and associated circuitry for performing TOF measurements based on different TOF measurement techniques with different TOF sensing capabilities, so that the device can select an optimum TOF method in response to a distance to a target object and sense the distance to the target object using the selected TOF method. The embodiments of the disclosed technology may be implemented in various ways to achieve one or more advantages or desired effects. Although a number of illustrative embodiments have been described, it should be understood that numerous modifications or enhancements to the disclosed embodiments and other embodiments can be devised based on what is described and/or illustrated in this patent document.
107,304
11860280
DETAILED DESCRIPTION Reference will now be made in detail to background examples and some embodiments of the invention, examples of which are illustrated in the accompanying drawings. FIG.1is a diagram illustrative of an embodiment of a 3-D LIDAR system100in one exemplary operational scenario. 3-D LIDAR system100includes a lower housing101and an upper housing102that includes a domed shell element103constructed from a material that is transparent to infrared light (e.g., light having a wavelength within the spectral range of 700 to 1,700 nanometers). In one example, domed shell element103is transparent to light having a wavelength centered at 905 nanometers. As depicted inFIG.1, a plurality of beams of light105are emitted from 3-D LIDAR system100through domed shell element103over an angular range, α, measured from a central axis104. In the embodiment depicted inFIG.1, each beam of light is projected onto a plane defined by the x and y axes at a plurality of different locations spaced apart from one another. For example, beam106is projected onto the xy plane at location107. In the embodiment depicted inFIG.1, 3-D LIDAR system100is configured to scan each of the plurality of beams of light105about central axis104. Each beam of light projected onto the xy plane traces a circular pattern centered about the intersection point of the central axis104and the xy plane. For example, over time, beam106projected onto the xy plane traces out a circular trajectory108centered about central axis104. FIG.2is a diagram illustrative of another embodiment of a 3-D LIDAR system10in one exemplary operational scenario. 3-D LIDAR system10includes a lower housing11and an upper housing12that includes a cylindrical shell element13constructed from a material that is transparent to infrared light (e.g., light having a wavelength within the spectral range of 700 to 1,700 nanometers). In one example, cylindrical shell element13is transparent to light having a wavelength centered at 905 nanometers. As depicted inFIG.2, a plurality of beams of light15are emitted from 3-D LIDAR system10through cylindrical shell element13over an angular range, β. In the embodiment depicted inFIG.2, the chief ray of each beam of light is illustrated. Each beam of light is projected outward into the surrounding environment in a plurality of different directions. For example, beam16is projected onto location17in the surrounding environment. In some embodiments, each beam of light emitted from system10diverges slightly. In one example, a beam of light emitted from system10illuminates a spot size of 20 centimeters in diameter at a distance of 100 meters from system10. In this manner, each beam of illumination light is a cone of illumination light emitted from system10. In the embodiment depicted inFIG.2, 3-D LIDAR system10is configured to scan each of the plurality of beams of light15about central axis14. For purposes of illustration, beams of light15are illustrated in one angular orientation relative to a non-rotating coordinate frame of 3-D LIDAR system10and beams of light15′ are illustrated in another angular orientation relative to the non-rotating coordinate frame. As the beams of light15rotate about central axis14, each beam of light projected into the surrounding environment (e.g., each cone of illumination light associated with each beam) illuminates a volume of the environment corresponding to the cone-shaped illumination beam as it is swept around central axis14.
FIG.3depicts an exploded view of 3-D LIDAR system100in one exemplary embodiment. 3-D LIDAR system100further includes a light emission/collection engine112that rotates about central axis104. In the embodiment depicted inFIG.3, a central optical axis117of light emission/collection engine112is tilted at an angle, θ, with respect to central axis104. As depicted inFIG.3, 3-D LIDAR system100includes a stationary electronics board110mounted in a fixed position with respect to lower housing101. Rotating electronics board111is disposed above stationary electronics board110and is configured to rotate with respect to stationary electronics board110at a predetermined rotational velocity (e.g., more than 200 revolutions per minute). Electrical power signals and electronic signals are communicated between stationary electronics board110and rotating electronics board111over one or more transformer, capacitive, or optical elements, resulting in a contactless transmission of these signals. Light emission/collection engine112is fixedly positioned with respect to the rotating electronics board111, and thus rotates about central axis104at the predetermined angular velocity, ω. As depicted inFIG.3, light emission/collection engine112includes an array of integrated LIDAR measurement devices113. In one aspect, each integrated LIDAR measurement device includes a light emitting element, a light detecting element, and associated control and signal conditioning electronics integrated onto a common substrate (e.g., printed circuit board or other electrical circuit board). Light emitted from each integrated LIDAR measurement device passes through a series of optical elements116that collimate the emitted light to generate a beam of illumination light projected from the 3-D LIDAR system into the environment. In this manner, an array of beams of light105, each emitted from a different LIDAR measurement device, is emitted from 3-D LIDAR system100as depicted inFIG.1. In general, any number of LIDAR measurement devices can be arranged to simultaneously emit any number of light beams from 3-D LIDAR system100. Light reflected from an object in the environment due to its illumination by a particular LIDAR measurement device is collected by optical elements116. The collected light passes through optical elements116where it is focused onto the detecting element of the same particular LIDAR measurement device. In this manner, collected light associated with the illumination of different portions of the environment by illumination generated by different LIDAR measurement devices is separately focused onto the detector of each corresponding LIDAR measurement device. FIG.4depicts a view of optical elements116in greater detail. As depicted inFIG.4, optical elements116include four lens elements116A-D arranged to focus collected light118onto each detector of the array of integrated LIDAR measurement devices113. In the embodiment depicted inFIG.4, light passing through optics116is reflected from mirror124and is directed onto each detector of the array of integrated LIDAR measurement devices113. In some embodiments, one or more of the optical elements116is constructed from one or more materials that absorb light outside of a predetermined wavelength range. The predetermined wavelength range includes the wavelengths of light emitted by the array of integrated LIDAR measurement devices113.
In one example, one or more of the lens elements are constructed from a plastic material that includes a colorant additive to absorb light having wavelengths less than infrared light generated by each of the array of integrated LIDAR measurement devices113. In one example, the colorant is Epolight 7276A available from Aako BV (The Netherlands). In general, any number of different colorants can be added to any of the plastic lens elements of optics116to filter out undesired spectra. FIG.5depicts a cutaway view of optics116to illustrate the shaping of each beam of collected light118. A LIDAR system, such as 3-D LIDAR system10depicted inFIG.2, and system100, depicted inFIG.1, includes a plurality of integrated LIDAR measurement devices each emitting a pulsed beam of illumination light from the LIDAR device into the surrounding environment and measuring return light reflected from objects in the surrounding environment. FIG.6depicts an integrated LIDAR measurement device120in one embodiment. Integrated LIDAR measurement device120includes a pulsed light emitting device122, a light detecting element123, associated control and signal conditioning electronics integrated onto a common substrate121(e.g., electrical board), and connector126. Pulsed light emitting device122generates pulses of illumination light124and detector123detects collected light125. Integrated LIDAR measurement device120generates digital signals indicative of the distance between the 3-D LIDAR system and an object in the surrounding environment based on a time of flight of light emitted from the integrated LIDAR measurement device120and detected by the integrated LIDAR measurement device120. Integrated LIDAR measurement device120is electrically coupled to the 3-D LIDAR system via connector126. Integrated LIDAR measurement device120receives control signals from the 3-D LIDAR system and communicates measurement results to the 3-D LIDAR system over connector126. FIG.7depicts a schematic view of an integrated LIDAR measurement device130in another embodiment. Integrated LIDAR measurement device130includes a pulsed light emitting device134, a light detecting element138, a beam splitter135(e.g., polarizing beam splitter, non-polarizing beam splitter, dielectric film, etc.), an illumination driver133, signal conditioning electronics139, analog to digital (A/D) conversion electronics140, controller132, and digital input/output (I/O) electronics131integrated onto a common substrate144. In some embodiments, these elements are individually mounted to a common substrate (e.g., printed circuit board). In some embodiments, groups of these elements are packaged together and the integrated package is mounted to a common substrate. In general, each of the elements is mounted to a common substrate to create an integrated device, whether they are individually mounted or mounted as part of an integrated package. FIG.8depicts an illustration of the timing associated with the emission of a measurement pulse from an integrated LIDAR measurement device130and capture of the returning measurement pulse. As depicted inFIGS.7and8, the measurement begins with a pulse firing signal146generated by controller132. Due to internal system delay, a pulse index signal149is determined by controller132that is shifted from the pulse firing signal146by a time delay, TD.
The time delay includes the known delays associated with emitting light from the LIDAR system (e.g., signal communication delays and latency associated with the switching elements, energy storage elements, and pulsed light emitting device) and known delays associated with collecting light and generating signals indicative of the collected light (e.g., amplifier latency, analog-digital conversion delay, etc.). As depicted inFIGS.7and8, a return signal147is detected by the LIDAR system in response to the illumination of a particular location. A measurement window (i.e., a period of time over which collected return signal data is associated with a particular measurement pulse) is initiated by enabling data acquisition from detector138. Controller132controls the timing of the measurement window to correspond with the window of time when a return signal is expected in response to the emission of a measurement pulse sequence. In some examples, the measurement window is enabled at the point in time when the measurement pulse sequence is emitted and is disabled at a time corresponding to the time of flight of light over a distance that is substantially twice the range of the LIDAR system. In this manner, the measurement window is open to collect return light from objects adjacent to the LIDAR system (i.e., negligible time of flight) to objects that are located at the maximum range of the LIDAR system. In this manner, all other light that cannot possibly contribute to a useful return signal is rejected. As depicted inFIG.8, return signal147includes two return measurement pulses that correspond with the emitted measurement pulse. In general, signal detection is performed on all detected measurement pulses. Further signal analysis may be performed to identify the closest signal (i.e., first instance of the return measurement pulse), the strongest signal, and the furthest signal (i.e., last instance of the return measurement pulse in the measurement window). Any of these instances may be reported as potentially valid distance measurements by the LIDAR system. For example, a time of flight, TOF1, may be calculated from the closest (i.e., earliest) return measurement pulse that corresponds with the emitted measurement pulse as depicted inFIG.8. In some embodiments, the signal analysis is performed entirely by controller132. In these embodiments, signals143communicated from integrated LIDAR measurement device130include an indication of the distances determined by controller132. In some embodiments, signals143include the digital signals148generated by A/D converter140. These raw measurement signals are processed further by one or more processors located on board the 3-D LIDAR system, or external to the 3-D LIDAR system, to arrive at a measurement of distance. In some embodiments, controller132performs preliminary signal processing steps on signals148and signals143include processed data that is further processed by one or more processors located on board the 3-D LIDAR system, or external to the 3-D LIDAR system, to arrive at a measurement of distance.
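The window and multi-return logic above can be summarized in the following sketch; the 200 m maximum range and the sample return list are assumptions, and the internal delay TD is taken as already subtracted from the recorded times.

    # Sketch of the measurement window and return-pulse selection: keep
    # returns inside the window (time of flight over twice the maximum
    # range) and report the closest, strongest, and furthest returns as
    # candidate distance measurements.
    C = 299_792_458.0        # speed of light in m/s
    MAX_RANGE_M = 200.0      # assumed maximum range of the LIDAR system

    def measurement_window_s():
        return 2.0 * MAX_RANGE_M / C

    def analyze_returns(returns):
        """`returns` holds (time_s, amplitude) pairs measured from pulse
        emission; returns a dict of candidate distances in meters."""
        window = measurement_window_s()
        valid = [(t, a) for t, a in returns if 0.0 <= t <= window]
        if not valid:
            return None

        def to_distance(t):
            return C * t / 2.0

        return {
            "closest_m": to_distance(min(valid)[0]),
            "strongest_m": to_distance(max(valid, key=lambda r: r[1])[0]),
            "furthest_m": to_distance(max(valid)[0]),
        }

    if __name__ == "__main__":
        # Two in-window returns (as in FIG.8) and one beyond the window.
        print(analyze_returns([(400e-9, 0.8), (900e-9, 0.3), (5e-6, 0.1)]))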
In some examples, the delay time is greater than the time of flight of the measurement pulse sequence to and from an object located at the maximum range of the LIDAR device. In this manner, there is no cross-talk among any of the integrated LIDAR measurement devices. In some other examples, a measurement pulse is emitted from one integrated LIDAR measurement device before a measurement pulse emitted from another integrated LIDAR measurement device has had time to return to the LIDAR device. In these embodiments, care is taken to ensure that there is sufficient spatial separation between the areas of the surrounding environment interrogated by each beam to avoid cross-talk. Illumination driver133generates a pulsed electrical current signal145in response to pulse firing signal146. Pulsed light emitting device134generates pulsed light emission136in response to pulsed electrical current signal145. The illumination light136is focused and projected onto a particular location in the surrounding environment by one or more optical elements of the LIDAR system (not shown). In some embodiments, the pulsed light emitting device is laser based (e.g., laser diode). In some embodiments, the pulsed illumination sources are based on one or more light emitting diodes. In general, any suitable pulsed illumination source may be contemplated. In some embodiments, digital I/O131, timing logic132, A/D conversion electronics140, and signal conditioning electronics139are integrated onto a single, silicon-based microelectronic chip. In another embodiment, these same elements are integrated into a single gallium-nitride or silicon-based circuit that also includes the illumination driver. In some embodiments, the A/D conversion electronics and controller132are combined as a time-to-digital converter. As depicted inFIG.7, return light137reflected from the surrounding environment is detected by light detector138. In some embodiments, light detector138is an avalanche photodiode. Light detector138generates an output signal147that is amplified by signal conditioning electronics139. In some embodiments, signal conditioning electronics139includes an analog trans-impedance amplifier. However, in general, the amplification of output signal147may include multiple amplifier stages. In this sense, an analog trans-impedance amplifier is provided by way of non-limiting example, as many other analog signal amplification schemes may be contemplated within the scope of this patent document. The amplified signal is communicated to A/D converter140. The digital signals are communicated to controller132. Controller132generates an enable/disable signal employed to control the timing of data acquisition by ADC140in concert with pulse firing signal146. As depicted inFIG.7, the illumination light136emitted from integrated LIDAR measurement device130and the return light137directed toward integrated LIDAR measurement device130share a common path. In the embodiment depicted inFIG.7, the return light137is separated from the illumination light136by a polarizing beam splitter (PBS)135. PBS135could also be a non-polarizing beam splitter, but this generally would result in an additional loss of light. In this embodiment, the light emitted from pulsed light emitting device134is polarized such that the illumination light passes through PBS135. However, return light137generally includes a mix of polarizations. Thus, PBS135directs a portion of the return light toward detector138and a portion of the return light toward pulsed light emitting device134.
In some embodiments, it is desirable to include a quarter waveplate after PBS135. This is advantageous in situations when the polarization of the return light is not significantly changed by its interaction with the environment. Without the quarter waveplate, the majority of the return light would pass through PBS135and be directed toward the pulsed light emitting device134, which is undesirable. However, with the quarter waveplate, the majority of the return light will be directed by PBS135toward detector138. However, in general, when the polarization of the return light is completely mixed and a single PBS is employed as depicted inFIG.7, half of the return light will be directed toward detector138, and the other half will be directed toward pulsed light emitting device134, regardless of whether a quarter waveplate is used. FIGS.9-17depict various embodiments that avoid these losses. FIG.9depicts a front view of an embodiment150of an integrated LIDAR measurement device including a detector151(e.g., an avalanche photodiode) having a circular active area152with a diameter, D. In one example, the diameter of the active area152is approximately 300 micrometers. In one aspect, detector151includes a slot153extending all the way through the detector. In one example, the slot has a height, HS, of approximately 70 micrometers and a width, W, of approximately 200 micrometers. FIG.10depicts a side view of embodiment150depicted inFIG.9. As depicted inFIG.10, embodiment150also includes pulsed light emitting device153fixed to the back of avalanche photodiode detector151and configured to emit illumination light154through slot153in detector151. In one example, pulsed light emitting device153includes three laser diodes packaged together to create an emission area having a height, HE, of 10 micrometers with a divergence angle of approximately 15 degrees. In this example, the thickness, S, of the detector151is approximately 120 micrometers. In this manner, detector151and pulsed light emitting device153are located in the beam path of light emitted from an integrated LIDAR measurement device and returned to the integrated LIDAR measurement device. Although a certain amount of return light will be directed toward slot153and not detected, the relatively small area of slot153compared to the active area152of detector151ensures that the majority of the return light will be detected. FIG.11depicts a side view of an embodiment160of an integrated LIDAR measurement device including a detector162having an active area163, a pulsed light emitting device161located outside of the active area163, a focusing optic164and an active optical element165. Active optical element165is coupled to a controller of the integrated LIDAR measurement device. The controller communicates control signal167to active element165that causes the active optical element to change states. In a first state, depicted inFIG.11, the active optical element changes its effective index of refraction and causes the light166emitted from pulsed light emitting device161to refract toward optical axis, OA. In a second state, depicted inFIG.12, the active optical element changes its effective index of refraction such that return light168passes through active optical element165and focusing optic164toward the active area163of detector162. During this state, the controller controls pulsed light emitting device161such that it does not emit light.
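The alternation between these two states can be summarized as a simple control sequence. The sketch below is a hypothetical illustration only: the ActiveOpticalElement and Emitter interfaces, the state constants, and the timing call are stand-ins for hardware control logic that the patent does not spell out.

```python
import time

# Hypothetical state constants for the active optical element of FIGS. 11-12.
EMIT_STATE = 1     # element refracts emitted light toward the optical axis, OA
RECEIVE_STATE = 2  # element passes return light through to the detector

def fire_and_listen(element, emitter, window_s):
    """One measurement cycle: emit in the first state, listen in the second."""
    element.set_state(EMIT_STATE)
    emitter.fire_pulse()              # pulse emitted while aligned to the axis
    element.set_state(RECEIVE_STATE)  # now route return light to the detector
    time.sleep(window_s)              # emitter is held dark during this state
```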
In this embodiment, the light emitted by pulsed light emitting device161is not initially aligned with the optical axis of the optical system. However, during periods of time when light is emitted from the pulsed light emitting device161, active optical element changes its state such that the illumination light is aligned with the optical axis of the optical system. In some embodiments, the active optical element is a phased array. In some embodiments, the active optical element is an acousto-optical modulator. In some embodiments, the active optical element is a surface acoustic wave modulator. In general, many active devices capable of altering their effective index of refraction may be contemplated. FIG.13depicts a side view of an embodiment170of an integrated LIDAR measurement device including a detector173having an active area172, a pulsed light emitting device171located outside of the active area172, concentric focusing optics174and focusing optics175centered along the optical axis of the integrated LIDAR measurement device. As depicted inFIG.13, the return light177is focused onto the active area172of detector173by concentric focusing optics174. In addition, light176emitted from pulsed light emitting device171is refracted toward optical axis, OA, and collimated by focusing optics175. As depicted inFIG.13, focusing optics175occupy a relatively small area immediately centered about the optical axis. Concentric focusing optics174are also centered about the optical axis, but are spaced apart from the optical axis. FIG.14depicts a top view of an embodiment180of an integrated LIDAR measurement device including a detector187having an active area183, a pulsed light emitting device181located outside of the active area183, concentric focusing optics184, and mirror182. As depicted inFIG.14, return light185is focused by focusing optics184and reflects from mirror182toward the active area183of detector187. In one aspect, mirror182includes a slot through which light emitted from pulsed light emitting device181is passed. Illumination light186is emitted from pulsed light emitting device181, passes through the slot in mirror182, is collimated by focusing optics184, and exits the integrated LIDAR measurement device. FIGS.15A-Cdepict three different light paths through an embodiment190of an integrated LIDAR measurement device. This embodiment includes a pulsed light emitting device191, a PBS193, a polarization control element194(e.g., Pockels cell), a PBS195, a quarter waveplate196, mirror element197(e.g., a PBS, a half cube with total internal reflection, etc.), delay element198, polarizing beam combiner199, half waveplate200, and detector192. Polarization control element194is coupled to a controller of the integrated LIDAR measurement device. The controller communicates control signal204to polarization control element194that causes the polarization control element to alter the polarization state of light passing through the polarization control element in accordance with control signal204. In a first state, depicted inFIG.15A, polarization control element194is configured not to change the polarization of light passing through when illumination light201is emitted from pulsed light emitting device191.FIG.15Adepicts the path of illumination light201through embodiment190. Illumination light201passes through PBS193, polarization control element194, PBS195, and quarter waveplate196.
In the examples depicted inFIGS.15A-C, the pulsed light emitting device191emits p-polarized light, and the PBS elements193and195are configured to directly transmit p-polarized light. However, in general, different polarizations may be utilized to achieve the same result. In a second state, depicted inFIGS.15B and15C, polarization control element194is configured to change the polarization of light passing through when return light202is detected by detector192, and light is not emitted from pulsed light emitting device191. FIG.15Bdepicts the path of a portion202A of return light202that is p-polarized after passing through quarter waveplate196. The p-polarized return light passes through PBS195and polarization control element194. In this state, polarization control element194switches the polarization of the return light from p-polarization to s-polarization. The s-polarized return light is reflected from PBS193toward half waveplate200. Half waveplate200switches the polarization again from s-polarization back to p-polarization. Polarizing beam combiner199reflects the p-polarized light toward detector192. FIG.15Cdepicts the path202B of the portion of return light202that is s-polarized after passing through quarter waveplate196. The s-polarized return light is reflected from beam splitter195to mirror element197, passes through beam delay element198, and then through polarizing beam combiner199, which directly transmits the s-polarized light onto detector192. Beam delay element198is introduced to balance the optical path lengths of the s- and p-polarized return light. Beam delay element198may simply be a piece of optical glass of appropriate length. Embodiment190also includes a beam path extension element206located in the illumination beam path between the pulsed light emitting device191and polarizing beam splitter193. In some embodiments, beam path extension element206is simply a piece of optical glass of appropriate length. Beam path extension element206is configured to equalize the illumination path length and the length of the return paths202A and202B. Note that the return path lengths202A and202B are equalized by beam delay element198. Since the return paths202A and202B pass through additional elements, their effective optical path is longer. By equalizing the illumination path length with the length of the return paths, the return beam is focused to a spot size that approaches the size of the illumination output aperture. This enables the use of the smallest possible detector, which offers the least noise, the least sensitivity to solar background, and the highest bandwidth. Embodiment190also includes a beam delay element205in return path202B to match the effect of half waveplate200in return path202A. Due to the finite amount of time required to switch the state of the polarization control element, the LIDAR-based measurement of relatively short distances is based on light collected by the return path202B depicted inFIG.15C. While the polarization control element is changing state, return light propagating along the path202A depicted inFIG.15Bwill not necessarily be subject to a change in polarization. Thus, this light has a high probability of propagating through PBS193to pulsed light emitting device191, and therefore will not be detected. This situation is acceptable because signal strength is typically not a significant issue for relatively short range measurements.
However, for relatively long range measurements, after a sufficient period of time to ensure that the state of the polarization state switching element has changed, return light propagating down both paths described inFIGS.15B and15Cis available for detection and distance estimation. As discussed hereinbefore, quarter waveplate196is desirable. When performing relatively short range measurements, only light passing through the return path202B described inFIG.15Cis available. When the polarization of the return light is completely mixed, half of the light will pass through the path described inFIG.15C. However, when the return light has reflected from a specular target, the polarization remains unchanged. Without introducing the quarter waveplate196, light reflected from specular targets would propagate through the path described inFIG.15B, and would be undetected or significantly weakened for short range measurements when the polarization control element is changing states. FIG.16depicts an embodiment220of an integrated LIDAR measurement device that includes an additional polarization control element221in return path202B. Embodiment220includes like-numbered elements described with reference to embodiment190. Polarization control elements194and221effectively control the amount of return light that reaches detector192. As discussed with reference toFIG.15B, if polarization control element194does not change the polarization state of return light202A, the light is directed to pulsed light emitting device191, not detector192. Conversely, if polarization control element194changes the polarization state of return light202A, the light is directed to detector192. Similarly, if polarization control element221changes the polarization state of return light202B from s-polarization to p-polarization, the light is directed away from detector192, and ultimately dumped (i.e., absorbed elsewhere). Conversely, if polarization control element221does not change the polarization state of return light202B, the light is directed toward detector192. Since the degree of polarization change imparted by polarization control elements194and221is variably controlled (e.g., Pockels cells), it follows that the amount of return light that reaches detector192is controlled by a controller of the integrated LIDAR measurement device (e.g., controller132) via control signals204and222. For example, as discussed hereinbefore, when performing relatively short range measurements, only light passing through the return path202B described inFIG.15CandFIG.16is available for detection as polarization control element194is transitioned from its state depicted inFIG.15A. During this period of time, there is a risk that detector192saturates. In this scenario, it is desirable to control polarization control element221such that the polarization of a portion of return light202B is partially changed from s-polarization to p-polarization and the p-polarized light component is dumped before it reaches detector192. In general, the timing and profiles of control signals204and222can be tuned to maximize the dynamic range of detector192for different environmental conditions. For example, previously detected signals, signals from other integrated LIDAR measurement devices, images of the surrounding environment, or any combination thereof, could be utilized to adjust the dynamic range of detector192by changing the timing and profiles of control signals204and222during operation of an integrated LIDAR measurement device.
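One illustrative way to realize such distance-dependent control of polarization control element221is to program the fraction of return light transmitted to the detector as a function of pulse travel distance, as sketched below. This is an assumed form only: the thresholds, the linear ramp, and the function names are not prescribed by the patent, which leaves the profile shapes open.

```python
C = 299_792_458.0  # speed of light, m/s

def transmission_profile(t_since_fire_s, near_range_m=5.0, full_range_m=50.0):
    """Fraction of return light that element 221 lets through to the detector,
    as a function of elapsed time since pulse emission (i.e., travel distance).
    Near returns are attenuated to avoid detector saturation; far returns are
    passed fully to maximize measurement sensitivity."""
    distance_m = 0.5 * C * t_since_fire_s   # round-trip time -> target range
    if distance_m <= near_range_m:
        return 0.1                           # dump ~90% of very close returns
    if distance_m >= full_range_m:
        return 1.0                           # full sensitivity at long range
    frac = (distance_m - near_range_m) / (full_range_m - near_range_m)
    return 0.1 + 0.9 * frac                  # linear ramp between thresholds
```

A controller could sample such a profile over the measurement window to drive control signal222, and could hold several alternative profiles for different pulse powers or scene conditions.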
In one example, the timing and profiles of control signals204and222are programmed as a function of pulse travel distance. This can be used to avoid detector saturation caused by objects that are close to the sensor. For larger distances, measurement sensitivity is maximized and polarization control element221is programmed to pass return light202B without changing its polarization. In this manner, the maximum amount of return light reaches detector192. Multiple profiles could be used depending on illumination pulse power, features detected in the sensed environment from data collected in a previous return, etc. FIG.17depicts an embodiment230of an integrated LIDAR measurement device that includes additional, optional elements that may be added individually, or in any combination, to embodiment190described with reference toFIGS.15A-C. Embodiment230includes like-numbered elements described with reference to embodiment190. As depicted inFIG.17, collimating optics231are located in the optical path between pulsed light emitting device191and beam splitter193. Typically, a pulsed light emitting device based on laser diode technology or light emitting diode technology generates a divergent beam of light. By collimating the illumination light emitted from the pulsed light emitting device, a small beam size is maintained throughout the illumination path. This allows the optical elements in the illumination path to remain small. Also, embodiment230includes a focusing lens232after quarter waveplate196. By refocusing the collimated light transported through the integrated LIDAR measurement device, the output aperture of pulsed light emitting device191is re-imaged just outside of the integrated LIDAR measurement device, keeping both the cross-section of the integrated LIDAR measurement device and the effective exit and entrance aperture of the integrated measurement device small. This increases possible pixel packaging density and pixel resolution. Since focusing lens232is located in the optical path shared by the illumination light and the return light, and the illumination and return paths are balanced, an image point235is generated at the output of the integrated LIDAR measurement device. This image point235is imaged back to both the detector192and the pulsed light emitting device191. Various optical elements such as apertures, field stops, pinhole filters, etc. may be located at image point235to shape and filter the images projected onto detector192. In addition, embodiment230includes a focusing optic233located in the optical path between the detector192and beam combiner199to focus the return light onto detector192. Also, embodiment230includes a spectral filter234located in the return beam path between the focusing optic233and beam combiner199. In some embodiments, spectral filter234is a bandpass filter that passes light in the spectral band of the illumination beam and absorbs light outside of this spectral band. In many embodiments, spectral filters operate most effectively when incident light is normal to the surface of the spectral filter. Thus, ideally, spectral filter234is located at any location in the return beam path where the light is collimated, or nearly collimated. FIG.18depicts a side view of an embodiment210of an integrated LIDAR measurement device including a detector212and a pulsed light emitting device213located in front of detector212within a lens element211.FIG.19depicts a front view of embodiment210.
As depicted inFIGS.18-19, return light217is collected and focused by lens element211(e.g., a compound parabolic concentrator) onto detector212. Although the input port218of lens element211is depicted as planar inFIG.18, in general, the input port218may be shaped to focus return light217onto detector212in any suitable manner. Pulsed light emitting device213is located within the envelope of lens element211(e.g., molded within lens element211). Although pulsed light emitting device213blocks a certain amount of return light, its small size relative to the collection area of lens element211mitigates the negative impact. Conductive elements214provide electrical connectivity between pulsed light emitting device213and other elements of the integrated LIDAR measurement device (e.g., illumination driver133) via conductive leads215. In some embodiments, conductive elements214also provide structural support to locate pulsed light emitting device213within the envelope of lens element211. FIG.20depicts a side view of an embodiment240of an integrated LIDAR measurement device including a detector242and a pulsed light emitting device241located in front of detector242. As depicted inFIG.20, return light246is collected and focused by focusing optics244onto detector242. Pulsed light emitting device241is located within focusing optics244(e.g., molded with focusing optics244). Although pulsed light emitting device241blocks a certain amount of return light, its small size relative to the collection area of focusing optics244mitigates the negative impact. Conductive elements (not shown) provide electrical connectivity between pulsed light emitting device241and other elements of the integrated LIDAR measurement device (e.g., illumination driver133). In some embodiments, the conductive elements also provide structural support to locate pulsed light emitting device241within focusing optics244. FIG.21depicts a side view of an embodiment250of an integrated LIDAR measurement device including a detector253having an active area252and a pulsed light emitting device251located outside the field of view of the active area252of the detector. As depicted inFIG.21, an overmold254is mounted over the detector. The overmold254includes a conical cavity that corresponds with the ray acceptance cone of return light255. In one aspect, illumination light259from illumination source251is injected into the detector reception cone by a fiber waveguide257. An optical coupler256optically couples illumination source251(e.g., array of laser diodes) with fiber waveguide257. At the end of the fiber waveguide257, a mirror element258is oriented at a 45-degree angle with respect to the waveguide to inject the illumination light259into the cone of return light255. In one embodiment, the end faces of fiber waveguide257are cut at a 45-degree angle and the end faces are coated with a highly reflective dielectric coating to provide a mirror surface. In some embodiments, waveguide257includes a rectangular glass core and a polymer cladding of lower index of refraction. In some embodiments, the entire assembly250is encapsulated with a material having an index of refraction that closely matches the index of refraction of the polymer cladding. In this manner, the waveguide injects the illumination light259into the acceptance cone of return light255with minimal occlusion.
The placement of the waveguide257within the acceptance cone of the return light projected onto the active sensing area252of detector253is selected to ensure that the illumination spot and the detector field of view have maximum overlap in the far field. In some embodiments, such as the embodiments described with reference toFIG.1andFIG.2, an array of integrated LIDAR measurement devices is mounted to a rotating frame of the LIDAR device. This rotating frame rotates with respect to a base frame of the LIDAR device. However, in general, an array of integrated LIDAR measurement devices may be movable in any suitable manner (e.g., gimbal, pan/tilt, etc.) or fixed with respect to a base frame of the LIDAR device. In some other embodiments, each integrated LIDAR measurement device includes a beam directing element (e.g., a scanning mirror, MEMS mirror, etc.) that scans the illumination beam generated by the integrated LIDAR measurement device. In some other embodiments, two or more integrated LIDAR measurement devices each emit a beam of illumination light toward a scanning mirror device (e.g., MEMS mirror) that reflects the beams into the surrounding environment in different directions. FIG.22illustrates a method300of performing LIDAR measurements in at least one novel aspect. Method300is suitable for implementation by a LIDAR system such as LIDAR system100illustrated inFIG.1and LIDAR system10illustrated inFIG.2of the present invention. In one aspect, it is recognized that data processing blocks of method300may be carried out via a pre-programmed algorithm executed by one or more processors of controller132, or any other general-purpose computing system. It is recognized herein that the particular structural aspects of LIDAR system100do not represent limitations and should be interpreted as illustrative only. In block301, a measurement pulse of illumination light is generated by an illumination source mounted to a printed circuit board. In block302, a return pulse of light is detected by a detector mounted to the printed circuit board. The return pulse is an amount of the measurement pulse reflected from a location in a three dimensional environment illuminated by the corresponding measurement pulse. In some embodiments, the measurement pulse of illumination light and the return pulse share a common optical path over a distance within the integrated LIDAR device. In block303, an output signal is generated that is indicative of the detected return pulse. In block304, an amount of electrical power is provided to the illumination source by an illumination driver mounted to the printed circuit board. The provided electrical power causes the illumination source to emit the measurement pulse of illumination light. In block305, the output signal is amplified by analog signal conditioning electronics mounted to the printed circuit board. In block306, the amplified output signal is converted to a digital signal by an analog to digital converter mounted to the printed circuit board. In block307, a time of flight of the measurement pulse from the LIDAR device to the measured location in the three dimensional environment and back to the LIDAR device is determined based on the digital signal. In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Although certain specific embodiments are described above for instructional purposes, the teachings of this patent document have general applicability and are not limited to the specific embodiments described above. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.
43,430
11860281
Furthermore, APPENDIX A has been enclosed following the Detailed Description. The APPENDIX A comprises an article providing information regarding at least some aspects of the present technology described herein and/or additional aspects of the present technology. The APPENDIX A and the information forming part thereof have been enclosed for reference purposes and are to be deleted from the application prior to the publication of the application as a patent. Furthermore, APPENDIX B has been enclosed following the Detailed Description. The APPENDIX B comprises a poster providing information regarding at least some aspects of the present technology described herein and/or additional aspects of the present technology. The APPENDIX B and the information forming part thereof have been enclosed for reference purposes and are to be deleted from the application prior to the publication of the application as a patent.
DETAILED DESCRIPTION
The examples and conditional language recited herein are principally intended to aid the reader in understanding the principles of the present technology and not to limit its scope to such specifically recited examples and conditions. It will be appreciated that those skilled in the art may devise various arrangements which, although not explicitly described or shown herein, nonetheless embody the principles of the present technology and are included within its spirit and scope. Furthermore, as an aid to understanding, the following description may describe relatively simplified implementations of the present technology. As persons skilled in the art would understand, various implementations of the present technology may be of a greater complexity. In some cases, what are believed to be helpful examples of modifications to the present technology may also be set forth. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and a person skilled in the art may make other modifications while nonetheless remaining within the scope of the present technology. Further, where no examples of modifications have been set forth, it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. Moreover, all statements herein reciting principles, aspects, and implementations of the technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, whether they are currently known or developed in the future. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present technology. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo-code, and the like represent various processes which may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. The functions of the various elements shown in the figures, including any functional block labeled as a “processor”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included. Software modules, or simply modules which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. With these fundamentals in place, we will now consider some non-limiting examples to illustrate various implementations of aspects of the present technology. Referring initially toFIG.1, there is shown a computer system100suitable for use with some implementations of the present technology, the computer system100comprising various hardware components including one or more single or multi-core processors collectively represented by processor110, a solid-state drive120, a memory130, which may be a random-access memory or any other type of memory. Communication between the various components of the computer system100may be enabled by one or more internal and/or external buses (not shown) (e.g. a PCI bus, universal serial bus, IEEE 1394 “Firewire” bus, SCSI bus, Serial-ATA bus, etc.), to which the various hardware components are electronically coupled. According to embodiments of the present technology, the solid-state drive120stores program instructions suitable for being loaded into the memory130and executed by the processor110for determining a presence of an object. For example, the program instructions may be part of a vehicle control application executable by the processor110. It is noted that the computer system100may have additional and/or optional components, such as a network communication module140for communication, via a communication network (for example, a communication network245depicted inFIG.2) with other electronic devices and/or servers, localization modules (not depicted), and the like. FIG.2illustrates a networked computer environment200suitable for use with some embodiments of the systems and/or methods of the present technology. The networked computer environment200comprises an electronic device210associated with a vehicle220, or associated with a user (not depicted) who can operate the vehicle220, a server235in communication with the electronic device210via a communication network245(e.g. the Internet or the like, as will be described in greater detail herein below). Optionally, the networked computer environment200can also include a GPS satellite (not depicted) transmitting and/or receiving a GPS signal to/from the electronic device210. It will be understood that the present technology is not limited to GPS and may employ a positioning technology other than GPS. It should be noted that the GPS satellite can be omitted altogether. 
The vehicle220to which the electronic device210is associated may comprise any leisure or transportation vehicle such as a private or commercial car, truck, motorbike or the like. The vehicle may be user-operated or a driverless vehicle. It should be noted that specific parameters of the vehicle220are not limiting, these specific parameters including: vehicle manufacturer, vehicle model, vehicle year of manufacture, vehicle weight, vehicle dimensions, vehicle weight distribution, vehicle surface area, vehicle height, drive train type (e.g. 2× or 4×), tyre type, brake system, fuel system, mileage, vehicle identification number, and engine size. The implementation of the electronic device210is not particularly limited, but as an example, the electronic device210may be implemented as a vehicle engine control unit, a vehicle CPU, a vehicle navigation device (e.g. TomTom™ vehicle navigation device, Garmin™ vehicle navigation device), a tablet, a personal computer built into the vehicle220and the like. Thus, it should be noted that the electronic device210may or may not be permanently associated with the vehicle220. Additionally or alternatively, the electronic device210can be implemented in a wireless communication device such as a mobile telephone (e.g. a smart-phone or a radio-phone). In certain embodiments, the electronic device210has a display270. The electronic device210may comprise some or all of the components of the computer system100depicted inFIG.1. In certain embodiments, the electronic device210is an on-board computer device and comprises the processor110, the solid-state drive120and the memory130. In other words, the electronic device210comprises hardware and/or software and/or firmware, or a combination thereof, for determining the presence of an object around the vehicle220, as will be described in greater detail below. In accordance with the non-limiting embodiments of the present technology, the electronic device210further comprises or has access to a plurality of sensors230. According to these embodiments, the plurality of sensors230may comprise sensors allowing for various implementations of the present technology. Examples of the plurality of sensors include but are not limited to: cameras, LIDAR systems, and RADAR systems, etc. Each of the plurality of sensors230is operatively coupled to the processor110for transmitting the so-captured information to the processor110for processing thereof, as will be described in greater detail herein below. Each or some of the plurality of sensors230can be mounted on an interior, upper portion of a windshield of the vehicle220, but other locations are within the scope of the present disclosure, including on a back window, side windows, front hood, rooftop, front grill, or front bumper of the vehicle220. In some non-limiting embodiments of the present technology, each or some of the plurality of sensors230can be mounted in a dedicated enclosure (not depicted) mounted on the top of the vehicle220. Further, the spatial placement of each or some of the plurality of sensors230can be designed taking into account the specific technical configuration thereof, configuration of the enclosure, weather conditions of the area where the vehicle220is to be used (such as frequent rain, snow, and other elements) or the like. In some non-limiting embodiments of the present technology, the plurality of sensors comprises at least a first sensor240and a second sensor260.
In these embodiments, both the first sensor240and the second sensor260can be configured to capture 3D point cloud data of the surrounding area250of the vehicle220. In this regard, each of the first sensor240and the second sensor260may comprise a LIDAR instrument. LIDAR stands for LIght Detection and Ranging. It is expected that a person skilled in the art will understand the functionality of the LIDAR instrument, but briefly speaking, a transmitter (not depicted) of one of the first sensor240and the second sensor260implemented as the LIDAR instrument sends out a laser pulse and the light particles (photons) are scattered back to a receiver (not depicted) of one of the first sensor240and the second sensor260implemented as the LIDAR instrument. The photons that come back to the receiver are collected with a telescope and counted as a function of time. Using the speed of light (approximately 3×10^8 m/s), the processor110can then calculate how far the photons have traveled (in the round trip). Photons can be scattered back off of many different entities surrounding the vehicle220, such as other particles (aerosols or molecules) in the atmosphere, other cars, stationary objects or potential obstructions in front of the vehicle220. In a specific non-limiting example, each one of the first sensor240and the second sensor260can be implemented as the LIDAR-based systems that may be of the type available from Velodyne LiDAR, Inc. of 5521 Hellyer Avenue, San Jose, CA 95138, the United States of America. It should be expressly understood that the first sensor240and the second sensor260can be implemented in any other suitable equipment. However, in the non-limiting embodiments of the present technology, the first sensor240and the second sensor260do not have to be implemented based on the same LIDAR-based sensor; as such, respective technical characteristics of the first sensor240may differ from those of the second sensor260. In some embodiments of the present technology, the first sensor240and the second sensor260can be housed in the above-mentioned enclosure (not separately depicted) located on the roof of the vehicle220. Further, in the non-limiting embodiments of the present technology, the plurality of sensors230may comprise more than two LIDAR-based sensors, such as three or any other suitable number. In these embodiments, all LIDAR-based sensors, along with the first sensor240and the second sensor260, can be housed in the above-mentioned enclosure (not separately depicted). In the non-limiting embodiments of the present technology, the first sensor240and the second sensor260are calibrated such that for a first 3D point cloud captured by the first sensor240and a second 3D point cloud captured by the second sensor260, the processor110is configured to identify overlapping regions by merging the first 3D point cloud and the second 3D point cloud. This calibration can be executed during the manufacturing and/or set-up of the vehicle220, or at any suitable time thereafter; in other words, the calibration can be executed when retrofitting the vehicle220with the first sensor240and the second sensor260in accordance with the non-limiting embodiments of the present technology contemplated herein. Alternatively, the calibration can be executed during equipping the vehicle220with the first sensor240and the second sensor260in accordance with the non-limiting embodiments of the present technology contemplated herein.
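A minimal sketch of what such a calibration enables is given below: once a rigid extrinsic transform between the two sensors is known, a cloud captured by the second sensor260can be re-expressed in the frame of the first sensor240and the two clouds merged so that overlapping regions can be identified. The function name and the use of numpy are illustrative assumptions; the rotation R and offset t would come from the calibration procedure described above, not from this sketch.

```python
import numpy as np

def merge_point_clouds(cloud_240, cloud_260, R, t):
    """cloud_240, cloud_260: (N, 3) and (M, 3) arrays of XYZ points.
    R: (3, 3) rotation and t: (3,) translation mapping sensor-260 coordinates
    into the sensor-240 frame (the extrinsic calibration)."""
    cloud_260_in_240 = cloud_260 @ R.T + t  # re-express in sensor 240's frame
    return np.vstack([cloud_240, cloud_260_in_240])
```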
In some non-limiting embodiments of the present technology, each of the first sensor240and the second sensor260may be respective rotational LIDAR systems, sometimes referred to as “spinning LIDAR systems”, each operating with its pre-determined scanning frequency. Accordingly, in these non-limiting embodiments, the processor110may be configured to synchronize, for example, the first sensor240with the second sensor260by adjusting the associated scanning frequencies such that, at a given moment in time, the first sensor240and the second sensor260are at a same angular position relative to their respective vertical central axes. By so doing, the processor110is configured to cause the first sensor240and the second sensor260to capture 3D data indicative of one and the same scene of the surrounding area250of the vehicle220. In some non-limiting embodiments of the present technology, the synchronization of the first sensor240and the second sensor260may be initialized, by the processor110, inter alia, during maintenance periods of the vehicle220; at moments of starting the vehicle220; or during the operation of the vehicle220with a certain periodicity. It is also contemplated that the plurality of sensors230may further comprise other sensors (not depicted), such as cameras, radars, Inertial Measurement Unit (IMU) sensors, and the like. In some non-limiting embodiments of the present technology, the communication network245is the Internet. In alternative non-limiting embodiments, the communication network245can be implemented as any suitable local area network (LAN), wide area network (WAN), a private communication network or the like. It should be expressly understood that these implementations for the communication network245are for illustration purposes only. How a communication link (not separately numbered) between the electronic device210and the communication network245is implemented will depend, inter alia, on how the electronic device210is implemented. Merely as an example and not as a limitation, in those embodiments of the present technology where the electronic device210is implemented as a wireless communication device such as a smartphone or a navigation device, the communication link can be implemented as a wireless communication link. Examples of wireless communication links include, but are not limited to, a 3G communication network link, a 4G communication network link, and the like. The communication network245may also use a wireless connection with the server235. In some embodiments of the present technology, the server235is implemented as a conventional computer server and may comprise some or all of the components of the computer system100ofFIG.1. In one non-limiting example, the server235is implemented as a Dell™ PowerEdge™ Server running the Microsoft™ Windows Server™ operating system, but can also be implemented in any other suitable hardware, software, and/or firmware, or a combination thereof. In the depicted non-limiting embodiments of the present technology, the server235is a single server. In alternative non-limiting embodiments of the present technology (not depicted), the functionality of the server235may be distributed and may be implemented via multiple servers. In some non-limiting embodiments of the present technology, the processor110of the electronic device210can be in communication with the server235to receive one or more updates. The updates can be, but are not limited to, software updates, map updates, routes updates, weather updates, and the like.
In some embodiments of the present technology, the processor110can also be configured to transmit to the server235certain operational data, such as routes travelled, traffic data, performance data, and the like. Some or all data transmitted between the vehicle220and the server235may be encrypted and/or anonymized. In the description provided herein, when certain processes and method steps are executed by the processor110of the electronic device210, it should be expressly understood that such processes and method steps can be executed solely by the processor110, in a shared manner (i.e. distributed) between the processor110and the server235, or solely by the server235. In other words, when the present description refers to the processor110or the electronic device210executing certain processes or method steps, it is to expressly cover processes or steps executed by the processor110, by the server235, or jointly executed by the processor110and the server235. With reference toFIG.3, there is depicted a process300(also referred to as a LIDAR data acquisition procedure300), executed by the electronic device210for generating LIDAR data310captured by a rotational LIDAR system of the vehicle220(e.g., the first sensor240). In some non-limiting embodiments of the present technology, the process300can be executed in a continuous manner. In other embodiments of the present technology, the process300can be implemented at pre-determined intervals, such as every 2 milliseconds or any other suitable time interval. It should be noted that as the vehicle220is travelling on a road302, the electronic device210is configured to acquire LIDAR data310from the rotational LIDAR system, which is representative of objects in the surrounding area250of the vehicle220. Broadly speaking, the LIDAR data310is acquired by the electronic device210in a form of a plurality of captured 3D point clouds (not numbered) including: a first captured 3D point cloud312, a second captured 3D point cloud322, an nthcaptured 3D point cloud332, and so forth. It should be noted that a given captured 3D point cloud comprises a large number of data points registered by the rotational LIDAR system during a respective scan of the surrounding area250. To better illustrate this, consider the example of the first captured 3D point cloud312acquired by the electronic device210at a moment in time t1. The first captured 3D point cloud312comprises a large number of data points captured by the rotational LIDAR system during a first scan thereby of the surrounding area250. For example, the first captured 3D point cloud312may include 30 000 captured data points, 50 000 captured data points, 150 000 captured data points, or the like, which are representative of objects in the surrounding area250. One of the captured data points of the first captured 3D point cloud312is a captured data point314. Data indicative of the captured data point314may include spatial coordinates of the captured data point314in 3D space as determined/captured by the rotational LIDAR system. It is contemplated that additional data may be associated with the captured data point314. For instance, one or more additional parameters such as, for example, distance, intensity, and/or angle, as well as other parameters, may be associated with the captured data point314, as known in the art. It is contemplated that the rotational LIDAR system may capture a respective 3D point cloud at a respective time step while the vehicle220is travelling.
Such time step may in some cases correspond to an interval of time that is necessary for the rotational LIDAR system to perform a scan of the surrounding area250. Such time step may also correspond to an interval of time that is necessary for the rotational LIDAR system to perform a full rotation about an azimuthal axis thereof. For example, once the rotational LIDAR system performs the first scan of the surroundings, the rotational LIDAR system may provide to the electronic device210, at the moment t1, the first captured 3D point cloud312. In the same example, once the rotational LIDAR system performs the second scan of the surroundings, the rotational LIDAR system may provide to the electronic device210, at the moment t2, the second captured 3D point cloud322. In the same example, once the rotational LIDAR system performs the nthscan of the surroundings, the rotational LIDAR system may provide to the electronic device210, at the moment tn, the nthcaptured 3D point cloud332. Overall, it can be said that, as the vehicle220is travelling on the road302, the electronic device210may be configured to acquire from the rotational LIDAR system of the vehicle220the LIDAR data310. The LIDAR data310may comprise, inter alia, the plurality of 3D point clouds that has been captured by the rotational LIDAR system during respective scans of the surrounding area250. In some embodiments of the present technology, the electronic device210may be configured to merge the plurality of captured 3D point clouds. For example, by merging the plurality of registered 3D point clouds, the electronic device210may be configured to generate a 3D map representation of the surrounding area250of the vehicle220. Such 3D map representation of the surrounding area250may be employed by the electronic device210for controlling operation of the vehicle220when travelling on the road302. To that end, the electronic device210may be configured to perform one or more 3D point cloud merging techniques. In at least some embodiments of the present technology, the electronic device210may be configured to merge a given pair of captured 3D point clouds from the LIDAR data310by executing an Iterative Closest Point (ICP) algorithm. Broadly speaking, the ICP algorithm is configured to minimize the distance between a pair of 3D point clouds. Usually, during execution of the ICP algorithm, one 3D point cloud is referred to as a “target” or “reference” 3D point cloud, while the other 3D point cloud is referred to as a “source” 3D point cloud. The goal of the ICP algorithm is to keep the target 3D point cloud fixed and to identify a “transformation rule” that, when applied onto the source 3D point cloud, would result in the pair of 3D point clouds being merged with each other. Typically, the ICP algorithm includes the following steps executed iteratively (a minimal sketch of this loop is provided below):
initializing the merging process by performing an “initial guess” of how the pair of registered 3D point clouds are to be located in a same 3D space;
selecting and/or filtering the registered 3D point clouds, for determining which data points are to be used for a next step of the ICP algorithm;
matching the selected/filtered data points, thereby determining correspondences between data points from the target 3D point cloud and data points from the source 3D point cloud;
assigning an error metric; and
minimizing the error metric by applying one of the transformation rules.
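As a concrete illustration of the loop enumerated above, the following is a minimal, textbook point-to-point ICP sketch, with numpy and scipy assumed available. It is not the modified algorithm of the present technology (whose selection/filtration step is elaborated below); the iteration count, the identity initial guess, and nearest-neighbour matching are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Point-to-point ICP: source and target are (N, 3) arrays; the target
    stays fixed while the source is iteratively moved toward it."""
    R, t = np.eye(3), np.zeros(3)                # "initial guess": identity
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        # matching: nearest neighbour in the target for each source point
        _, idx = tree.query(src)
        matched = target[idx]
        # minimize the point-to-point error metric via the SVD (Kabsch) solution
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        if np.linalg.det(Vt.T @ U.T) < 0:        # guard against reflections
            Vt[-1] *= -1
        R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step            # an "intermediate position"
        R, t = R_step @ R, R_step @ t + t_step   # accumulated transformation rule
    return R, t
```

Each pass through the loop corresponds to one application of an estimated transformation rule; weighting or rejecting matched pairs, as discussed next, would slot in between the matching step and the minimization step.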
It is contemplated that in some cases, the electronic device210may be configured to perform weighting of some matched pairs of data points and/or rejection of other matched pairs of data points, prior to minimizing the error metric. It should be noted that the electronic device210may be configured to estimate one or more of the transformation rules that may be used for evaluating an error metric at a respective iteration of the ICP algorithm. Hence, it can be said that, by iteratively applying estimated transformation rules to minimize the error metric between the corresponding pairs of data points from the pair of 3D point clouds, the electronic device210causes the source 3D point cloud to go through a plurality of “intermediate positions” to a “final position” at which the selected and matched data point pairs are substantially merged. Some examples of known methods of performing the ICP algorithm are described in the article entitled “Efficient Variants of the ICP Algorithm,” written by Szymon Rusinkiewicz and Marc Levoy, and published by Stanford University; the content of which is hereby incorporated by reference in its entirety. It should be noted that (i) selecting/filtering data points and (ii) matching corresponding data points have an important effect on the efficiency of the ICP algorithm performed by the electronic device210for merging the pair of 3D point clouds.
Modified Selection Step
In a first broad aspect of the present technology, the developers of the present technology have devised methods and devices for improving the selection/filtration step of data points during the ICP algorithm. Put another way, in at least some embodiments of the present technology, the electronic device210is configured to perform a modified ICP algorithm during which a particular type of data point filtration is used for improving the efficiency of the merging procedure between a pair of 3D point clouds. First of all, data point filtration may allow the electronic device210to perform the merging of 3D point clouds faster. It should be noted that conventional LIDAR systems may provide a given captured 3D point cloud having a large number of captured data points, as mentioned above (e.g., between 30 000 and 150 000 registered data points). As such, taking into account each and every one of these captured data points during execution of the ICP algorithm can hinder the computation performance of the electronic device210. Second of all, it should be noted that the electronic device210may be configured to estimate and use normal vectors, or simply “normals”, associated with respective captured data points for performing one or more steps of the ICP algorithm. However, normals for at least some captured data points may be miscalculated (erroneously estimated) for a variety of reasons. For example, a given normal may be miscalculated due to an improper selection of neighbouring captured data points—that is, a given normal for a given captured data point may be estimated based on neighbouring captured data points that are not located on the same surface as the given captured data point. Such miscalculation typically occurs when the given captured data point is located near a boundary of the surface on which it is located. As a result, captured data points having erroneously estimated normals ought to be filtered out from the following steps of the ICP algorithm since such erroneously estimated normals may negatively affect the accuracy of the transformation estimation step of the ICP algorithm.
However, developers of the present technology have realized that, in addition to an improper selection of neighbouring captured data points during estimation of a given normal of a given registered data point, other reasons may cause the electronic device 210 to erroneously estimate the given normal of the given captured data point. For example, the developers of the present technology have realized that the electronic device 210 may erroneously estimate the given normal of the given captured data point due to an erroneous capturing of the given captured data point by the LIDAR system. More specifically, the developers of the present technology have realized that the electronic device 210 may erroneously estimate the given normal of the given captured data point due to an erroneous measurement of the spatial coordinates of the given captured data point by the LIDAR system. Put another way, the developers of the present technology have realized that the spatial coordinates of a given captured data point are subject to a "measurement error" attributed to the LIDAR system itself during capturing. Hence, if the given captured data point is associated with erroneous spatial coordinates, the electronic device 210 is likely to erroneously estimate the normal for that given captured data point. As a result, the developers of the present technology have devised a particular type of computer-implemented data point filter to be used during the filtration step of the ICP algorithm, which accounts for the uncertainty in the spatial coordinates of captured data points by treating this uncertainty as Gaussian random variables. In the context of the present specification, this computer-implemented data point filter will be referred to as a "Normal Covariance Filter" (NCF). Developers of the present technology have realized that employing the NCF during the filtration step of the ICP algorithm allows (i) reducing the number of captured data points in some or each 3D point cloud that are to be merged, which accelerates execution of the ICP algorithm, and (ii) filtering out captured data points whose erroneously registered spatial coordinates cause miscalculated normals, which reduces negative effects on the precision of the transformation estimation step of the ICP algorithm. How the electronic device 210 is configured to implement the NCF and how the electronic device 210 is configured to filter a given captured 3D point cloud via the NCF will now be described in greater detail.

Filtering a Registered 3D Point Cloud by Using the NCF

How the electronic device 210 is configured to filter a given captured 3D point cloud for determining a respective filtered 3D point cloud will now be discussed in greater detail with reference to a single given captured data point from the given captured 3D point cloud. With reference to FIG. 4, there is depicted a filtration process 400 for a given captured data point 401 of a given captured 3D point cloud. However, it should be noted that the electronic device 210 may be configured to perform the filtration process 400 described herein below for respective captured data points from the given captured 3D point cloud for determining whether or not each is to be included in the respective filtered 3D point cloud. As depicted in FIG. 4, the electronic device 210 may be configured to identify a set of captured data points 402 that are sampled from the given captured 3D point cloud such that they are in proximity to the given captured data point 401.
In some embodiments of the present technology, the electronic device210may use data indicative of the set of captured data points402during execution an SVD algorithm of the modified filtration step of the ICP algorithm. Broadly speaking, the electronic device210may be configured to input into the SVD algorithm data representative of the set of captured data points402. For example, it is contemplated that the electronic device210may be configured to input into the SVD algorithm a matrix D where: D∈R{k×3}(1) where k is a number of captured data points in the set of captured data points402. As such, the matrix D inputted into the SVD algorithm represents the spatial coordinates of a k number of captured data points in the set of captured data points402. For greater clarity, it should be noted that a notation such as Di,jwill refer to an entry i,j from the matrix D, and a notation Diwill refer to a column i of the matrix D when counting from zero. By inputting the matrix D into the SVD algorithm, the electronic device210may receive, in response thereto, an output from the SVD algorithm such as: D=USVwhereU∈R{k×3},S∈R{3×3},V∈R{3×3}(2) where U, S, and V are matrix components of the SVD algorithm, and where the matrices U and V are orthogonal matrices representing a collection of basis vectors, and where S represents, in a sense, “influence” of data points in the space represented by Vivectors from the matrix V. Additional details regarding matrices U, S, and V are provided in the APPENDICES A and B. It should be noted that in cases where entries of matrix D are random variables, V2may also include random variables. Developers of the present technology contemplate that the entries of the matrix D may be Gaussian random variables. Further, it should be noted that V2from the matrix V corresponds to a normal vector406of an estimated plane404, as illustrated inFIG.4. Also, V0and V1from the matrix V represent arbitrary vectors constrained to form a left-handed coordinate system V0, V1, and V2. Also, the developers of the present technology contemplate that S2represents a measure of non-planarity of the estimated plane404. It is contemplated that the electronic device210may be configured to estimate the covariance of the normal vector V2(e.g., plane normal) by propagation of the uncertainty technique. Broadly speaking, propagation of uncertainty (or propagation of error) refers to the effect of uncertainty of variables on the uncertainty of a function that is based thereon. When the variables are the values of experimental measurements, for example as in this case the spatial coordinates of captured data points, they have uncertainties due to measurement limitations (e.g., instrument-related measurement error of the LIDAR system) which propagate due to the combination of variables in the respective function. Thus, it can be said that the electronic device210may be configured to estimate the covariance of the normal vector V2(the normal vector406) by propagation of the uncertainty technique via the following: Cov[V2]=JT⁢Cov[D]⁢J⁢where⁢J=∂f⁡(D)∂D⁢(D)(3) where Cov[D] is the covariance of the matrix D and is computed as follows: Cov[D]=ξ·I{3×3}(4) and where the function ƒ in the Equation (3) is a sequence of computations that the electronic device210may be configured to perform in order to yield V2from the matrix D, where I is an identity matrix, and where the measurement error of the LIDAR system is approximated by a sphere with a standard deviation ξ. 
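As a concrete illustration of the Equations (1) and (2), the following Python sketch estimates V2 for a small neighbourhood with numpy. Centring D on its centroid before the SVD is an assumption made here, as it is the usual practice for plane fitting, and is not prescribed by the present specification; the function and variable names are illustrative.

import numpy as np

def neighbourhood_basis(D):
    # D: (k, 3) matrix of spatial coordinates of the set of captured data
    # points. numpy returns V transposed, with singular values sorted in
    # decreasing order, so the last row Vt[2] is the direction of least
    # variance: the estimated plane normal V2. S[2] correspondingly measures
    # the non-planarity of the estimated plane.
    U, S, Vt = np.linalg.svd(D - D.mean(axis=0))
    return S, Vt

# Five nearly coplanar points; the recovered normal is close to the z axis.
D = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.01],
              [0.0, 1.0, -0.01], [1.0, 1.0, 0.02], [0.5, 0.5, 0.00]])
S, V = neighbourhood_basis(D)
print(V[2], S[2])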
It is contemplated that the electronic device 210 may be configured to estimate the covariance of the normal vector V2 (e.g., the plane normal) by the propagation of uncertainty technique. Broadly speaking, propagation of uncertainty (or propagation of error) refers to the effect of the uncertainty of variables on the uncertainty of a function that is based thereon. When the variables are the values of experimental measurements, for example, as in this case, the spatial coordinates of captured data points, they have uncertainties due to measurement limitations (e.g., the instrument-related measurement error of the LIDAR system) which propagate due to the combination of variables in the respective function. Thus, it can be said that the electronic device 210 may be configured to estimate the covariance of the normal vector V2 (the normal vector 406) by the propagation of uncertainty technique via the following:

Cov[V2] = J^T Cov[D] J, where J = ∂f(D)/∂D evaluated at D  (3)

where Cov[D] is the covariance of the matrix D and is computed as follows:

Cov[D] = ξ·I{3×3}  (4)

and where the function f in the Equation (3) is the sequence of computations that the electronic device 210 may be configured to perform in order to yield V2 from the matrix D, where I is an identity matrix, and where the measurement error of the LIDAR system is approximated by a sphere with a standard deviation ξ. It should be noted that the measurement error of the LIDAR system being approximated via a sphere with a standard deviation ξ means that the spatial coordinates of a given captured data point may be located within a respective sphere with a standard deviation ξ. It is contemplated that the value of ξ may be obtained from a manufacturer of the LIDAR system. It should be noted, however, that the covariance of V2 (i.e., Cov[V2]) lies in the system of coordinates of the LIDAR system. Therefore, the electronic device 210 may be configured to perform an alignment step so that Cov[V2] is in alignment with V2, such that:

Cov[V2] = Q C Q−1, or C = Q−1 Cov[V2] Q  (5)

where the matrix C is the covariance of V2 in a normal-aligned coordinate system (e.g., the coordinate system defined by V0, V1, and V2), and where Q and Q−1 are matrices used for performing the alignment, or in other words, matrices indicative of a transformation between an original coordinate system (e.g., the coordinate system of the LIDAR system) and the normal-aligned coordinate system. The electronic device 210 may be configured to use a given element of the matrix C for performing a decision-making process on whether the given captured data point 401 is to be filtered out/excluded from the filtered 3D point cloud. More specifically, the electronic device 210 may be configured to use the element C2,2 from the matrix C to determine whether the given captured data point 401 is to be filtered out. To that end, the electronic device 210 may be configured to compare C2,2 against a threshold value cT:

C2,2 ≥ cT  (6)

where cT is a pre-determined threshold, and where C2,2 is the element of the matrix C. It should be noted that C2,2 represents a distribution of potential normal vectors, for the given captured data point 401, projected on an axis aligned with V2. It should be noted that a cone 408 illustrated in FIG. 4 includes a plurality of potential normal vectors that can be determined based on, inter alia, the set of captured data points 402, where the spatial coordinates of the set of captured data points 402 respectively fall within spheres with the standard deviation ξ (e.g., while taking into account the measurement error of the LIDAR system). As such, this means that C2,2 may be a value representing an extent to which potential normal vectors may deviate from the normal V2 in the cone 408. It can also be said that C2,2 is related to an angle 410 of the cone 408. It should be noted that the threshold value cT may be pre-determined by an operator of the electronic device 210. For example, the threshold value cT may be obtained via data-driven optimization techniques. Hence, it is contemplated that, if the value C2,2 for the given captured data point 401 is above the threshold value cT, the electronic device 210 may be configured to exclude the given captured data point 401 from further processing, since the given captured data point 401 is associated with a normal uncertainty value that is too high (e.g., above the threshold value cT). As previously alluded to, the electronic device 210 may be configured to compute the matrix C for each captured data point from the given captured 3D point cloud, similarly to how the electronic device 210 computes the matrix C for the given captured data point 401.
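One plausible numerical reading of the Equations (3) to (6) is sketched below. A Monte-Carlo estimate stands in for the analytic Jacobian propagation of the Equation (3), the neighbourhood is centred before the SVD, the orthogonal basis matrix V is used for the alignment of the Equation (5), and the values of ξ and cT are placeholders; all of these are assumptions made for illustration only.

import numpy as np

def plane_basis(D):
    # V from the SVD of the centred neighbourhood, cf. Equations (1)-(2);
    # row V[2] is the estimated plane normal V2.
    _, _, Vt = np.linalg.svd(D - D.mean(axis=0))
    return Vt

def normal_uncertainty(D, xi, trials=500, seed=0):
    # Monte-Carlo stand-in for the propagation of uncertainty of Equations
    # (3)-(4): perturb the coordinates with the spherical measurement error
    # (standard deviation xi), re-estimate the normal, and take the empirical
    # covariance of the resulting normals.
    rng = np.random.default_rng(seed)
    V = plane_basis(D)
    n0 = V[2]
    normals = np.empty((trials, 3))
    for t in range(trials):
        n = plane_basis(D + rng.normal(scale=xi, size=D.shape))[2]
        normals[t] = n * np.sign(n @ n0)   # resolve the SVD sign ambiguity
    cov_v2 = np.cov(normals, rowvar=False)
    C = V @ cov_v2 @ V.T                   # alignment step, cf. Equation (5)
    return C[2, 2]

def keep_point(D, xi, c_T):
    # Equation (6): exclude the captured data point when C2,2 >= c_T.
    return normal_uncertainty(D, xi) < c_T

# A flat, well-spread neighbourhood yields a small C2,2; a nearly collinear
# one (poorly conditioned for plane fitting) yields a larger value.
flat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0], [0.5, 0.5, 0.0]])
thin = np.array([[0.0, 0.0, 0.0], [0.5, 0.001, 0.0], [1.0, -0.001, 0.0],
                 [1.5, 0.001, 0.0], [2.0, 0.0, 0.0]])
print(normal_uncertainty(flat, xi=0.02), normal_uncertainty(thin, xi=0.02))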
Hence, the electronic device 210 may be configured to compute the value C2,2 for each captured data point from the given captured 3D point cloud and may compare it to the threshold value cT, similarly to how the electronic device 210 is configured to compute the value C2,2 for the given captured data point 401 and compare it to the threshold value cT. Therefore, the electronic device 210 may be configured to filter out, from the given captured 3D point cloud, a set of excluded captured data points with the highest normal uncertainty values by comparing respective values of C2,2 against the threshold value cT, while taking into account the uncertainty of respective spatial coordinates caused by the measurement error of the LIDAR system. As a result, by applying the above filtration process 400 on respective captured data points, the electronic device 210 is configured to obtain a first filtered 3D point cloud, from the first captured 3D point cloud, which includes data points with precise normal vectors. The electronic device 210 may then be configured to employ the first filtered 3D point cloud, instead of the first captured 3D point cloud, during subsequent steps of the ICP algorithm. It should be noted that at least some aspects of the present technology related to the filtration process 400 are discussed in greater detail in APPENDICES A and B of the present specification. It is contemplated that the electronic device 210 may be configured to determine a second filtered 3D point cloud based on the second captured 3D point cloud in a similar manner to how the electronic device 210 is configured to determine the first filtered 3D point cloud based on the first captured 3D point cloud.

Modified Matching Step

In a second broad aspect of the present technology, the developers of the present technology have devised methods and devices for improving the matching step of the ICP algorithm. Put another way, in at least some embodiments of the present technology, the electronic device 210 is configured to perform a modified ICP algorithm during which geometric information regarding the LIDAR system, which captured the two 3D point clouds being merged, is used for rejecting at least some pairs of data points. Rejecting at least some pairs of data points in this manner may allow the electronic device 210 to improve the efficiency of the merging procedure between the two 3D point clouds. It should be noted that, in some embodiments, the electronic device 210 may be configured to execute this modified matching step of the ICP algorithm onto the first filtered 3D point cloud and the second filtered 3D point cloud, as described above, instead of the first captured 3D point cloud and the second captured 3D point cloud. However, in alternative non-limiting embodiments of the present technology, only one of the modified steps (i.e., either the modified filtering step or the modified matching step) may be executed. Optionally, it is contemplated that the electronic device 210 may be configured to perform this modified matching step of the ICP algorithm onto filtered 3D point clouds that have been filtered by the electronic device 210 during a selection step of the ICP algorithm in a different manner to what is described above; that is, the electronic device 210 may be configured to execute this modified matching step onto 3D point clouds having been determined in any suitable manner for a given application.
Broadly speaking, the purpose of the modified matching step being executed by the electronic device 210 is (i) to determine matched pairs of data points (also referred to as "correspondences"), and (ii) to "reject" some of these pairs so as to improve the efficiency of the merging procedure. Therefore, it can be said that the electronic device 210 may be configured to employ the first filtered 3D point cloud and the second filtered 3D point cloud for matching data points into respective pairs, such that each pair includes (i) a given first data point from the first filtered 3D point cloud and (ii) a given second data point from the second filtered 3D point cloud. By so doing, the electronic device 210 may be configured to determine correspondences between the two filtered 3D point clouds. The electronic device 210 may determine initial correspondences between data points in a variety of ways. In one example, the electronic device 210 may be configured to determine the initial pairs of data points based on a shortest distance therebetween. In another example, the electronic device 210 may be configured to determine pairs of data points not only based on the shortest distance therebetween, but also based on an analysis of respective normal vectors. Nevertheless, irrespective of how the electronic device 210 is configured to determine the initial pairs of data points between the first filtered 3D point cloud and the second filtered 3D point cloud (e.g., the initial correspondences), the electronic device 210 may be configured to employ the geometric correspondence rejector (GCR) for "rejecting" at least some initial pairs of data points, thereby determining a reduced plurality of pairs of data points to be used during a next step of the ICP algorithm. As mentioned above, the GCR criterion may be met when a distance between data points of a given initial pair of data points is below a geometry-based threshold. The geometry-based threshold may be referred to herein as a neighbour beam distance. As will become apparent from the description herein further below, in at least some embodiments of the present technology, the geometry-based threshold may be a longest neighbour beam distance amongst a set of neighbour beam distances determined for a given data point. How the electronic device 210 may be configured to determine this geometry-based threshold will now be described. It is contemplated that the electronic device 210 may be configured to determine a plurality of initial pairs (initial correspondences) between the two 3D point clouds, and then may be configured to use at least one neighbour beam distance for respective initial pairs to determine a reduced plurality of pairs between the two 3D point clouds. For example, the electronic device 210 may be configured to determine:

M = {∀ m̃k ∈ M̃ : IsInlier(pk, p′k; m̃k)}  (7)

where M is the reduced plurality of pairs, M̃ is the plurality of initial pairs (prior to the rejection of at least some initial pairs), m̃k is a given initial pair from the plurality of initial pairs, pk is a given first data point from the first 3D point cloud belonging to the given initial pair m̃k, and p′k is a given second data point from the second 3D point cloud belonging to the given initial pair m̃k.
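In code form, the Equation (7) amounts to filtering the initial correspondences with a predicate. The sketch below assumes an is_inlier callable implementing the IsInlier test that is constructed in the following paragraphs; the trivial distance-based stand-in used in the example, and all names, are illustrative assumptions.

import numpy as np

def reduce_pairs(initial_pairs, is_inlier):
    # Equation (7): M = { m_k in M_tilde : IsInlier(p_k, p'_k; m_k) }.
    return [(p_k, p_k_prime) for p_k, p_k_prime in initial_pairs
            if is_inlier(p_k, p_k_prime)]

rng = np.random.default_rng(0)
pairs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(100)]
# Stand-in test: keep pairs closer than a fixed distance.
M = reduce_pairs(pairs, lambda p, q: np.linalg.norm(p - q) < 1.0)
print(len(pairs), "initial pairs ->", len(M), "kept")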
It should be noted that IsInlier may be a computer-implemented procedure performed by the electronic device 210 that makes use of the geometry-based threshold for determining whether a given initial pair is to be considered an "inlier", and therefore should be included in the reduced plurality of pairs, or whether the given initial pair is to be considered an "outlier", and therefore should be excluded from the reduced plurality of pairs. Hence, it can be said that IsInlier is, in a sense, a computer-implemented "test" that the electronic device 210 may employ for determining the reduced plurality of pairs. It should be noted that the IsInlier test is described in greater detail in the APPENDICES A and B. To better illustrate the IsInlier test, it should be noted that a given one of the plurality of initial pairs is characterized by:

∥m̃k∥ = ∥pk − p′k∥  (8)

where ∥m̃k∥ is the Euclidean distance between the data points in the given initial pair. It is contemplated that a point-to-point distance between the data points in the given initial pair may correspond to the Euclidean distance between these data points. As such, it can be said that the IsInlier test follows the following logic:

∥m̃k∥ < d(pk, pl)  (9)

where a given d(pk, pl), or dk,l for short, is a given neighbour beam distance for the first given data point pk from the given initial pair m̃k. Put another way, the Equation (9) means that the purpose of the IsInlier test performed by the electronic device 210 is to compare the Euclidean distance between the initial pair of data points against an upper bound of a number of neighbour beam distances for the first data point pk. In other words, the purpose of the IsInlier test performed by the electronic device 210 is to compare the point-to-point distance between the initial pair of data points against the longest one of a number of neighbour beam distances for the first data point pk. To better illustrate this, reference will now be made to FIG. 5, which depicts a procedure 500 for determining the geometry-based threshold (e.g., the longest one of the number of neighbour beam distances) for a given first data point from a given initial pair. As seen in FIG. 5, let it be assumed that a first data point 512 corresponds to the first data point pk. Also let it be assumed, as described above, that during the modified matching step, the electronic device 210 has paired the first data point 512 with a given second data point p′k from an other 3D point cloud, and that the Euclidean distance (point-to-point distance) between this initial pair of data points pk and p′k is as defined by the Equation (8). As such, the electronic device 210 may be configured to determine a geometry-based threshold for the first data point 512 for executing the IsInlier test onto the initial pair of data points pk and p′k. To that end, the electronic device 210 may be configured to identify neighbouring data points (also referred to herein as "neighbour data points" or "neighbours") from the first 3D point cloud to the first data point 512. As depicted in FIG. 5, these neighbour data points are first data points 522, 532, 542, and 552. Thus, it can be said that the first given data point 512 (pk) may have four neighbours denoted as pl, where l ∈ {0 . . . 3}. In other words, the neighbour data points include the first data point 522 (pl=0), the first data point 532 (pl=1), the first data point 542 (pl=2), and the first data point 552 (pl=3).
It should be noted that these neighbour data points 522, 532, 542, and 552 are produced by lasers of the rotational LIDAR system that are vertically adjacent to the laser having produced the first given data point 512 (pk). For example, the neighbour data points 522 and 532 are registered via a first given laser that produces beams 520 and 530, the first data point 512 (pk) is registered via a second given laser that produces beam 510, and the neighbour data points 542 and 552 are registered via a third given laser that produces beams 550. Conventionally, it should be noted that, for rotational LIDAR systems, a level of each laser of the rotational LIDAR system is called a "ring", so if pk is part of the ring rj, the neighbour data points pl, where l ∈ {0 . . . 3}, are on the adjacent rings rj−1 and rj+1. Put another way, the first given laser corresponds to a ring 580, the second given laser corresponds to a ring 581, and the third given laser corresponds to a ring 582. Also, it should be noted that (i) the "left" neighbour data points 522 (pl=0) and 552 (pl=3) are captured at a previous increment of LIDAR azimuthal rotation, (ii) the first data point 512 (pk) is captured at a current increment of LIDAR azimuthal rotation, and (iii) the "right" neighbour data points 532 (pl=1) and 542 (pl=2) are captured at a next increment of LIDAR azimuthal rotation. It should be noted that the electronic device 210 may be configured to acquire angular distances between the first data point 512 (pk) and each one of the neighbour data points 522, 532, 542, and 552. For example, for a calibrated rotational LIDAR system, the electronic device 210 may be configured to acquire calibration parameters such as: ϕ, being an angular increment of the LIDAR system azimuthal rotation, and θj−1 and θj+1, being angular distances in pitch between the rings (from ring 581 to ring 580 and from ring 581 to ring 582, respectively). It can be said that the first data point 512 (pk) and the neighbours 522, 532, 542, and 552 (pl, where l ∈ {0 . . . 3}) may be represented on a segment of the unit-sphere. Put another way, the first data point 512 (pk) and the neighbouring data points 522, 532, 542, and 552 define a segment of the unit-sphere. This segment can be approximated by a plane 502 in FIG. 5, in turn assuming that the neighbour beams are parallel. It should be noted that such an approximation assumes that the vertically adjacent laser beams 510, 520, 530, and 550 are parallel. Developers of the present technology have realized that such an assumption is admissible for calibration parameters being near null values, such as ϕ≈0.08° and θ≈0.26°. It should be noted that a diagonal vector pk,l between the first data point 512 (pk) and a given neighbour pl on the plane 502 is defined as follows:

pk,l = [ϕ·u ± θj±1,j·v]·∥pk∥  (10)

where u and v are unit vectors, u is aligned on the plane 502 with the direction of azimuthal rotation of the LIDAR system, and v is perpendicular to u and is aligned on the plane 502 with the vertical direction. In other words, u and v define the abscissa and ordinate axes of the pk,l coordinate frame. Additional information regarding how the electronic device 210 may be configured to determine the u and v vectors is described in greater detail in the APPENDICES A and B. For example, the electronic device 210 may be configured to determine, by using the Equation (10), (i) for the neighbour data point 522 (pl=0) a respective diagonal vector 540 (pk,l=0), and (ii) for the neighbour data point 552 (pl=3) a respective diagonal vector 541 (pk,l=3), and so on.
It should be noted that the diagonal vectors (pk,l=1 and pk,l=2) for the neighbour data point 532 (pl=1) and for the neighbour data point 542 (pl=2), respectively, are not depicted in FIG. 5 for clarity of illustration only. Once the diagonal vector pk,l for a given neighbour data point pl is determined by the electronic device 210, the electronic device 210 may determine a neighbour beam distance dk,l between (i) the first data point 512 pk and (ii) a respective neighbour data point pl via the following:

dk,l = ∥pk,l∥² / √(∥pk,l∥² − ⟨pk,l, nk⟩²)  (12)

where dk,l is the projection of a respective diagonal vector pk,l, along the laser beam flight direction, onto a reflecting surface 504 that is orthogonal to the normal nk. It should be noted that the electronic device 210 may be configured to use information regarding the normal vector associated with the first data point 512 pk and/or a given normal vector associated with any one of the neighbour data points as the normal nk in the Equation (12). Put another way, the electronic device 210 may be configured to determine, via the Equation (12), the number of neighbour beam distances for the first data point 512 pk, which include: (i) a neighbour beam distance 528 dk,l=0 for the neighbour data point 522 pl=0, (ii) a neighbour beam distance 558 dk,l=3 for the neighbour data point 552 pl=3, and so on. It should be noted that the neighbour beam distances dk,l=1 and dk,l=2 are not depicted in FIG. 5 for clarity of illustration only. The electronic device 210 may then be configured to identify the largest one amongst the four neighbour beam distances dk,l=0, dk,l=1, dk,l=2, and dk,l=3 as d(pk, pl) from the Equation (9). In other words, the electronic device 210 may be configured to identify the largest one of the four neighbour beam distances dk,l=0, dk,l=1, dk,l=2, and dk,l=3 as the geometry-based threshold for the first data point 512 pk. Then, the electronic device 210 may employ the Equation (9) for determining whether the point-to-point distance between the first data point 512 pk and the given second data point p′k is above or below the geometry-based threshold for the first data point 512 pk. It should be noted that at least some aspects of the present technology are discussed in greater detail in APPENDICES A and B of the present specification.

Rejecting Pairs of Data Points by Using the GCR

To better illustrate this, let it be assumed that ∥m̃k∥ (the point-to-point distance between the given initial pair of data points) is below the geometry-based threshold for the first data point 512 pk (the largest one of the four neighbour beam distances dk,l=0, dk,l=1, dk,l=2, and dk,l=3). In such a case, the electronic device 210 may be configured to keep the given initial pair of data points for further processing. Now let it be assumed that ∥m̃k∥ (the point-to-point distance between the given initial pair of data points) is above the geometry-based threshold for the first data point 512 pk (the largest one of the four neighbour beam distances dk,l=0, dk,l=1, dk,l=2, and dk,l=3). In such a case, the electronic device 210 may determine that the point-to-point distance between the given initial pair of data points is too long, and hence, the electronic device 210 may be configured to exclude the given initial pair of data points from further processing.
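Gathering the Equations (8) to (12), an illustrative implementation of the IsInlier test might look as follows. The construction of the u and v unit vectors from the beam direction, the handling of the four sign combinations in the Equation (10), the reading of the Equation (12) used here, and the parameter names are all assumptions made for illustration; in practice, ϕ and the per-ring pitch values would come from the LIDAR calibration.

import numpy as np

def longest_neighbour_beam_distance(p_k, n_k, phi, theta_up, theta_dn):
    # Geometry-based threshold d(p_k, p_l): the largest of the four neighbour
    # beam distances computed from the diagonal vectors of Equation (10).
    r = np.linalg.norm(p_k)
    beam = p_k / r
    v = np.array([0.0, 0.0, 1.0]) - beam * beam[2]  # vertical direction on plane 502
    v /= np.linalg.norm(v)
    u = np.cross(v, beam)                           # azimuthal direction on plane 502
    dists = []
    for s_phi in (1.0, -1.0):                       # previous / next azimuth increment
        for theta in (theta_up, -theta_dn):         # upper ring / lower ring
            p_kl = (s_phi * phi * u + theta * v) * r                # Equation (10)
            sq = p_kl @ p_kl
            proj = p_kl @ n_k
            dists.append(sq / np.sqrt(max(sq - proj ** 2, 1e-12)))  # Equation (12)
    return max(dists)

def is_inlier(p_k, p_k_prime, n_k, phi, theta_up, theta_dn):
    # Equations (8)-(9): keep the pair when the point-to-point distance is
    # below the geometry-based threshold for p_k.
    d_pair = np.linalg.norm(p_k - p_k_prime)
    return d_pair < longest_neighbour_beam_distance(p_k, n_k, phi, theta_up, theta_dn)

# Example with small calibration angles (in radians), near the values quoted above.
p_k = np.array([10.0, 0.0, 1.0])
n_k = np.array([1.0, 0.0, 0.0])
print(is_inlier(p_k, p_k + 0.01, n_k,
                np.radians(0.08), np.radians(0.26), np.radians(0.26)))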
In some embodiments of the present technology, the electronic device 210 may be configured to execute a method 600 of processing LIDAR sensor data, as depicted in FIG. 6. In other embodiments, the electronic device 210 may be configured to execute a method 700 of processing LIDAR sensor data, as depicted in FIG. 7. How the electronic device 210 may be configured to execute the method 600 and the method 700 will now be discussed in turn.

STEP 602: Receiving, from the LIDAR Sensor, a First Dataset Having a Plurality of First Data Points

The method 600 begins at step 602 with the electronic device 210 configured to receive the first dataset from the LIDAR sensor, which has a plurality of first data points. For example, the first dataset may include the first captured 3D point cloud, as mentioned above. It should be noted that this first dataset is captured by the LIDAR sensor and may be biased by the measurement error of the LIDAR sensor itself. It should also be noted that a given first data point from the plurality of first data points is representative of respective spatial coordinates in a 3D space. The given first data point is also associated with a respective normal vector from a plurality of normal vectors. For example, a respective one of the plurality of first data points may be associated with a respective one of the plurality of normal vectors. It is contemplated that the LIDAR sensor may be mounted to a Self-Driving Car (SDC), such as, for example, the vehicle 220 depicted in FIG. 2.

STEP 604: Determining, by the Electronic Device, an Uncertainty Parameter for the Given First Data Point Based on a Normal Covariance of the Normal Vector of the Given First Data Point which Takes into Account a Measurement Error of the LIDAR Sensor when Determining the Respective Coordinates of the Given First Data Point

The method 600 continues to step 604 with the electronic device 210 being configured to determine an uncertainty parameter for the given first data point based on a normal covariance of the normal vector of the given first data point, which normal covariance takes into account a measurement error of the LIDAR sensor when determining the respective spatial coordinates of the given first data point. It is contemplated that the uncertainty parameter for the given first data point may correspond to the value C2,2 for the given first data point and may be obtained via the Equation (5) above. It should be noted that, in order to determine the value C2,2, the electronic device 210 may be configured to estimate Cov[V2] (e.g., the covariance of the normal vector 406) by the propagation of uncertainty technique summarized in the Equations (3) and (4). It should be noted that the estimation of Cov[V2] depends on the measurement error of the LIDAR system during determination of the spatial coordinates of captured data points (see the Equation (4)). Once the electronic device 210 estimates Cov[V2], the electronic device 210 may be configured to align it with the normal vector V2; that is, the electronic device 210 is configured to perform the alignment step for Cov[V2] as defined in the Equation (5). The result of the alignment step performed by the electronic device 210 yields the matrix C, which is the covariance of V2 in a normal-aligned coordinate system (e.g., the coordinate system defined by V0, V1, and V2). The electronic device 210 may then be configured to identify the value C2,2 from the matrix C as the uncertainty parameter for the given first data point. In some embodiments, the electronic device 210 may be configured to determine the normal covariance of the normal vector of the given first data point while taking into account an uncertainty in the respective coordinates of the given first data point.
It is contemplated that the uncertainty may be approximated as a Gaussian random variable. It is further contemplated that the measurement error of the LIDAR sensor may be approximated by a sphere with a standard deviation ξ (see the Equation (4)).

STEP 606: In Response to the Uncertainty Parameter being Above a Pre-Determined Threshold, Excluding the Given First Data Point from the Plurality of First Data Points

The method 600 continues to step 606 with the electronic device 210 configured to, in response to the uncertainty parameter for the given first data point being above a pre-determined threshold, exclude the given first data point from the plurality of first data points. For example, the pre-determined threshold may correspond to the threshold value cT from the Equation (6). It should be noted that, if the uncertainty parameter (e.g., C2,2) for the given first data point is above the threshold value cT, the electronic device 210 may be configured to exclude the given first data point from further processing during the ICP algorithm. Therefore, by performing the step 606, the electronic device 210 may be configured to determine a filtered plurality of first data points based on the first plurality of data points. It can also be said that the electronic device 210 is thus configured to determine the first filtered 3D point cloud based on the first captured 3D point cloud.

STEP 608: Using the Filtered Plurality of First Data Points, Instead of the Plurality of First Data Points, for Merging the First Dataset of the LIDAR Sensor with a Second Dataset of the LIDAR Sensor

The method 600 continues to step 608 with the electronic device 210 being configured to use the filtered plurality of first data points (determined during the step 606), instead of the plurality of first data points (received during the step 602), for merging the first dataset of the LIDAR sensor with a second dataset of the LIDAR sensor. In some embodiments, it is contemplated that the electronic device 210 may be configured to use the filtered plurality of first data points, instead of the plurality of first data points, during the matching step of the ICP algorithm. For example, the electronic device 210 may be configured to estimate a transformation rule between the first dataset and the second dataset during the ICP algorithm based on the filtered plurality of first data points. It is contemplated that the transformation rule may be an output of the ICP algorithm. In addition, in some embodiments of the present technology, the electronic device 210 may also use the merged first and second datasets for controlling operation of the SDC. As mentioned above, the electronic device 210 may be configured to execute the method 700, which will now be described in greater detail.

STEP 702: Receiving, from the LIDAR Sensor, an Indication of the LIDAR Sensor Data Including a First Dataset Having a Plurality of First Data Points and a Second Dataset Having a Plurality of Second Data Points

The method 700 begins at step 702 with the electronic device 210 configured to receive, from the LIDAR sensor, an indication of the LIDAR sensor data including a first dataset having a plurality of first data points and a second dataset having a plurality of second data points. It should be noted that each one of the plurality of first data points and each one of the plurality of second data points is representative of respective spatial coordinates in a 3D space and is associated with a respective normal vector from a plurality of normal vectors.
It should be noted that the first dataset and the second dataset (e.g., including a first 3D point cloud and a second 3D point cloud) may have been captured during sequential scanning phases of the LIDAR sensor.

STEP 704: Matching at Least Some of the Plurality of First Data Points with at Least Some of the Plurality of Second Data Points, Thereby Determining a Plurality of Pairs

The method 700 continues to step 704 with the electronic device 210 configured to match at least some of the plurality of first data points with at least some of the plurality of second data points, thereby determining a plurality of pairs. For example, by performing this matching, the electronic device 210 may be configured to determine the plurality of initial pairs, as mentioned above. This means that the electronic device 210 may be configured to determine M̃ as defined in the Equation (7), where M̃ is the plurality of initial pairs (prior to the rejection of at least some initial pairs). It should be noted that a given one of the plurality of these initial pairs includes (i) a given first data point and (ii) a given second data point, and the given first data point and the given second data point are separated by a point-to-point actual distance. For example, a given one of the plurality of these initial pairs may be defined as m̃k, which represents a given initial pair from the plurality of initial pairs. The given initial pair m̃k includes (i) the given one of the first plurality of data points, pk, from the first 3D point cloud, and (ii) the corresponding one of the plurality of second data points, p′k, from the second 3D point cloud. In this example, the point-to-point actual distance between pk and p′k may be defined as the Euclidean distance therebetween (see the Equation (8)). Hence, it is contemplated that the point-to-point actual distance may be a Euclidean distance between the respective initial pair of data points in the 3D space. It should be noted that the electronic device 210 may be configured to perform such matching in a variety of ways. For example, the electronic device 210 may be configured to perform the matching in a different manner for a given application.

STEP 706: For the Given One of the Plurality of Pairs, Determining a Pair-Specific Filtering Parameter

The method 700 continues to step 706 with the electronic device 210 configured to determine a pair-specific filtering parameter for the given one of the plurality of (initial) pairs that is determined during the step 704. For example, the electronic device 210 may be configured to determine the pair-specific filtering parameter to be positive for a given initial pair of data points if the point-to-point distance (e.g., the Euclidean distance between the initial pair of data points) is above the geometry-based threshold for the given initial pair of data points. It is contemplated that the electronic device 210 may be configured to determine the pair-specific filtering parameter by defining/identifying, for the given first data point in the given initial pair, a set of neighbouring data points (e.g., see the first data points 522, 532, 542, and 552 in FIG. 5). For example, this set of neighbouring points is associated with a subset of lasers of the plurality of lasers vertically adjacent to a given laser that has been instrumental in generating the given first data point from the given initial pair. As explained above, this subset of lasers has been instrumental in generating the set of neighbouring data points.
In at least some embodiments of the present technology, the set of neighbouring data points may comprise four neighbouring data points. For example, as depicted in FIG. 5, the four neighbouring data points may comprise two data points vertically above the given first data point and two data points vertically below the given first data point (e.g., located on an upper ring and on a lower ring from a ring of the given first data point, respectively). In some embodiments, it is contemplated that the given first data point and the set of neighbouring points define a segment of a unit sphere. It is also contemplated that the approximation plane generated by the electronic device 210 may be based on an assumption that the laser beams generated by the plurality of lasers are parallel. As explained above, it can be said that the first data point 512 (pk) and the neighbouring data points 522, 532, 542, and 552 (pl, where l ∈ {0 . . . 3}) may be represented on a segment of the unit-sphere. Put another way, the first data point 512 (pk) and the neighbouring data points 522, 532, 542, and 552 define a segment of the unit-sphere. This segment can be approximated by the plane 502 in FIG. 5, in turn assuming that the neighbour beams are parallel. It should be noted that such an approximation assumes that the vertically adjacent laser beams 510, 520, 530, and 550 are parallel. Developers of the present technology have realized that such an assumption is admissible for calibration parameters being near null values, such as ϕ≈0.08° and θ≈0.26°. It is contemplated that the electronic device 210 may be configured to calculate the neighbour beam distances between the given first data point 512 and respective ones of the set of neighbouring points 522, 532, 542, and 552. A given neighbour beam distance dk,l is representative of a linear distance between the given first data point 512 and a respective one of the set of neighbouring points (pl). It is contemplated that, in order to calculate the neighbour beam distances, the electronic device 210 may be configured to (i) generate a given diagonal vector pk,l between the given first data point 512 pk and a respective one of the set of neighbouring points in a first coordinate system (the unit vectors u and v), and (ii) calculate the respective neighbour beam distance dk,l by projecting the given diagonal vector pk,l onto the reflecting surface 504 orthogonal to a laser path direction. For example, the electronic device 210 may be configured to calculate the neighbour beam distances dk,l=0, dk,l=1, dk,l=2, and dk,l=3 via the Equation (12). It is further contemplated that the electronic device 210 may calculate the neighbour beam distances between the given first data point 512 and the set of neighbouring points based on an angular increment of azimuthal rotation and angular distances in pitch between vertically spaced lasers (e.g., ϕ and θ). Then, it is contemplated that the electronic device 210 may be configured to identify a largest neighbour beam distance amongst all the neighbour beam distances (in this case, amongst dk,l=0, dk,l=1, dk,l=2, and dk,l=3). Then, it is contemplated that the electronic device 210 may be configured to determine, in response to the point-to-point actual distance being above the largest neighbour beam distance, that the pair-specific filtering parameter is to be positive.
STEP 708: In Response to the Pair-Specific Parameter being Positive, Excluding the Given One of the Plurality of Pairs from Further Processing

The method 700 continues to step 708 with the electronic device 210 configured to discard/exclude the given one of the plurality of (initial) pairs (in the examples above, the initial pair including pk and p′k) from further processing. The electronic device 210 is thereby configured to define a reduced plurality of pairs (e.g., excluding at least the initial pair pk and p′k).

STEP 710: Processing the Reduced Subset of Pairs for Merging the First Dataset and the Second Dataset

The method 700 continues to step 710 with the electronic device 210 configured to process the reduced plurality of pairs for merging the first dataset and the second dataset. In some embodiments, the electronic device 210 may be configured to (as part of the processing of the reduced plurality of pairs) estimate a transformation rule between the first dataset and the second dataset. For example, the transformation rule may be an output of an ICP algorithm performed by the electronic device 210. In addition, the electronic device 210 may be configured to use the merged first and second datasets for controlling operation of the vehicle 220. Modifications and improvements to the above-described implementations of the present technology may become apparent to those skilled in the art. The foregoing description is intended to be exemplary rather than limiting. The scope of the present technology is therefore intended to be limited solely by the scope of the appended claims.
DETAILED DESCRIPTION

The present disclosure describes various examples of LIDAR systems and methods for detecting and mitigating the effects of obstructions on LIDAR windows. According to some embodiments, the described LIDAR system may be implemented in any sensing market, such as, but not limited to, transportation, manufacturing, metrology, medical, and security systems. According to some embodiments, the described LIDAR system is implemented as part of a front-end of a frequency modulated continuous-wave (FMCW) device that assists with spatial awareness for automated driver assist systems, or self-driving vehicles.

FIG. 1 illustrates a LIDAR system 100 according to example implementations of the present disclosure. The LIDAR system 100 includes one or more of each of a number of components, but may include fewer or additional components than shown in FIG. 1. As shown, the LIDAR system 100 includes optical circuits 101 implemented on a photonics chip. The optical circuits 101 may include a combination of active optical components and passive optical components. Active optical components may generate, amplify, and/or detect optical signals and the like. In some examples, the active optical component generates optical beams at different wavelengths, and includes one or more optical amplifiers, one or more optical detectors, or the like. Free space optics 115 may include one or more optical waveguides to carry optical signals, and route and manipulate optical signals to appropriate input/output ports of the active optical circuit. The free space optics 115 may also include one or more optical components such as taps, wavelength division multiplexers (WDM), splitters/combiners, polarization beam splitters (PBS), collimators, couplers, or the like. In some examples, the free space optics 115 may include components to transform the polarization state and direct received polarized light to optical detectors using a PBS, for example. The free space optics 115 may further include a diffractive element to deflect optical beams having different frequencies at different angles along an axis (e.g., a fast-axis). In some examples, the LIDAR system 100 includes an optical scanner 102 that includes one or more scanning mirrors that are rotatable along an axis (e.g., a slow-axis) that is orthogonal or substantially orthogonal to the fast-axis of the diffractive element to steer optical signals to scan an environment according to a scanning pattern. For instance, the scanning mirrors may be rotatable by one or more galvanometers. The optical scanner 102 also collects light incident upon any objects in the environment into a return optical beam that is returned to the passive optical circuit component of the optical circuits 101. For example, the return optical beam may be directed to an optical detector by a polarization beam splitter. In addition to the mirrors and galvanometers, the optical scanner 102 may include components such as a quarter-wave plate, lens, anti-reflective coated window, or the like. To control and support the optical circuits 101 and optical scanner 102, the LIDAR system 100 includes LIDAR control systems 110. The LIDAR control systems 110 may include a processing device for the LIDAR system 100. In some examples, the processing device may be one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like.
More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In some examples, the LIDAR control systems 110 may include a signal processing unit 112 such as a DSP. The LIDAR control systems 110 are configured to output digital control signals to control optical drivers 103. In some examples, the digital control signals may be converted to analog signals through signal conversion unit 106. For example, the signal conversion unit 106 may include a digital-to-analog converter. The optical drivers 103 may then provide drive signals to active optical components of optical circuits 101 to drive optical sources such as lasers and amplifiers. In some examples, several optical drivers 103 and signal conversion units 106 may be provided to drive multiple optical sources. The LIDAR control systems 110 are also configured to output digital control signals for the optical scanner 102. A motion control system 105 may control the galvanometers of the optical scanner 102 based on control signals received from the LIDAR control systems 110. For example, a digital-to-analog converter may convert coordinate routing information from the LIDAR control systems 110 to signals interpretable by the galvanometers in the optical scanner 102. In some examples, a motion control system 105 may also return information to the LIDAR control systems 110 about the position or operation of components of the optical scanner 102. For example, an analog-to-digital converter may in turn convert information about the galvanometers' position to a signal interpretable by the LIDAR control systems 110. The LIDAR control systems 110 are further configured to analyze incoming digital signals. In this regard, the LIDAR system 100 includes optical receivers 104 to measure one or more beams received by optical circuits 101. For example, a reference beam receiver may measure the amplitude of a reference beam from the active optical component, and an analog-to-digital converter converts signals from the reference receiver to signals interpretable by the LIDAR control systems 110. Target receivers measure the optical signal that carries information about the range and velocity of a target in the form of a beat frequency, modulated optical signal. The reflected beam may be mixed with a second signal from a local oscillator. The optical receivers 104 may include a high-speed analog-to-digital converter to convert signals from the target receiver to signals interpretable by the LIDAR control systems 110. In some examples, the signals from the optical receivers 104 may be subject to signal conditioning by signal conditioning unit 107 prior to receipt by the LIDAR control systems 110. For example, the signals from the optical receivers 104 may be provided to an operational amplifier for amplification of the received signals, and the amplified signals may be provided to the LIDAR control systems 110.
In some applications, the LIDAR system 100 may additionally include one or more imaging devices 108 configured to capture images of the environment, a global positioning system 109 configured to provide a geographic location of the system, or other sensor inputs. The LIDAR system 100 may also include an image processing system 114. The image processing system 114 can be configured to receive the images and geographic location, and send the images and location or information related thereto to the LIDAR control systems 110 or other systems connected to the LIDAR system 100. In operation according to some examples, the LIDAR system 100 is configured to use nondegenerate optical sources to simultaneously measure range and velocity across two dimensions. This capability allows for real-time, long range measurements of range, velocity, azimuth, and elevation of the surrounding environment. In some examples, the scanning process begins with the optical drivers 103 and LIDAR control systems 110. The LIDAR control systems 110 instruct the optical drivers 103 to independently modulate one or more optical beams, and these modulated signals propagate through the passive optical circuit to the collimator. The collimator directs the light at the optical scanning system that scans the environment over a preprogrammed pattern defined by the motion control system 105. The optical circuits 101 may also include a polarization wave plate (PWP) to transform the polarization of the light as it leaves the optical circuits 101. In some examples, the polarization wave plate may be a quarter-wave plate or a half-wave plate. A portion of the polarized light may also be reflected back to the optical circuits 101. For example, lensing or collimating systems used in LIDAR system 100 may have natural reflective properties or a reflective coating to reflect a portion of the light back to the optical circuits 101. Optical signals reflected back from the environment pass through the optical circuits 101 to the receivers. Because the polarization of the light has been transformed, it may be reflected by a polarization beam splitter along with the portion of polarized light that was reflected back to the optical circuits 101. Accordingly, rather than returning to the same fiber or waveguide as an optical source, the reflected light is reflected to separate optical receivers. These signals interfere with one another and generate a combined signal. Each beam signal that returns from the target produces a time-shifted waveform. The temporal phase difference between the two waveforms generates a beat frequency measured on the optical receivers (photodetectors). The combined signal can then be reflected to the optical receivers 104. The analog signals from the optical receivers 104 are converted to digital signals using ADCs. The digital signals are then sent to the LIDAR control systems 110. A signal processing unit 112 may then receive the digital signals and interpret them. In some embodiments, the signal processing unit 112 also receives position data from the motion control system 105 and galvanometers (not shown) as well as image data from the image processing system 114. The signal processing unit 112 can then generate a 3D point cloud with information about range and velocity of points in the environment as the optical scanner 102 scans additional points. The signal processing unit 112 can also overlay the 3D point cloud data with the image data to determine velocity and distance of objects in the surrounding area.
The system also processes the satellite-based navigation location data to provide a precise global location.

FIG. 2 is a time-frequency diagram 200 of an FMCW scanning signal 201 that can be used by a LIDAR system, such as system 100, to scan a target environment according to some embodiments. In one example, the scanning waveform 201, labeled as fFM(t), is a sawtooth waveform (sawtooth "chirp") with a chirp bandwidth ΔfC and a chirp period TC. The slope of the sawtooth is given as k = (ΔfC/TC). FIG. 2 also depicts target return signal 202 according to some embodiments. Target return signal 202, labeled as fFM(t−Δt), is a time-delayed version of the scanning signal 201, where Δt is the round trip time to and from a target illuminated by scanning signal 201. The round trip time is given as Δt = 2R/v, where R is the target range and v is the velocity of the optical beam, which is the speed of light c. The target range, R, can therefore be calculated as R = c(Δt/2). When the return signal 202 is optically mixed with the scanning signal, a range-dependent difference frequency ("beat frequency") ΔfR(t) is generated. The beat frequency ΔfR(t) is linearly related to the time delay Δt by the slope of the sawtooth k. That is, ΔfR(t) = kΔt. Since the target range R is proportional to Δt, the target range R can be calculated as R = (c/2)(ΔfR(t)/k). That is, the range R is linearly related to the beat frequency ΔfR(t). The beat frequency ΔfR(t) can be generated, for example, as an analog signal in optical receivers 104 of system 100. The beat frequency can then be digitized by an analog-to-digital converter (ADC), for example, in a signal conditioning unit such as signal conditioning unit 107 in LIDAR system 100. The digitized beat frequency signal can then be digitally processed, for example, in a signal processing unit, such as signal processing unit 112 in system 100. It should be noted that the target return signal 202 will, in general, also include a frequency offset (Doppler shift) if the target has a velocity relative to the LIDAR system 100. The Doppler shift can be determined separately, and used to correct the frequency of the return signal, so the Doppler shift is not shown in FIG. 2 for simplicity and ease of explanation. It should also be noted that the sampling frequency of the ADC will determine the highest beat frequency that can be processed by the system without aliasing. In general, the highest frequency that can be processed is one-half of the sampling frequency (i.e., the "Nyquist limit"). In one example, and without limitation, if the sampling frequency of the ADC is 1 gigahertz, then the highest beat frequency that can be processed without aliasing (ΔfRmax) is 500 megahertz. This limit in turn determines the maximum range of the system as Rmax = (c/2)(ΔfRmax/k), which can be adjusted by changing the chirp slope k. In one example, while the data samples from the ADC may be continuous, the subsequent digital processing described below may be partitioned into "time segments" that can be associated with some periodicity in the LIDAR system 100. In one example, and without limitation, a time segment might correspond to a predetermined number of chirp periods T, or a number of full rotations in azimuth by the optical scanner.
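The range relations above can be checked numerically, as in the following sketch. The chirp bandwidth and period used in the example are illustrative assumptions; the 1 GHz ADC case reproduces the 500 MHz Nyquist limit mentioned in the text.

C_LIGHT = 299_792_458.0                      # speed of light, m/s

def beat_to_range(f_beat_hz, chirp_bw_hz, chirp_period_s):
    # R = (c / 2) * (delta_f_R / k), with chirp slope k = bandwidth / period.
    k = chirp_bw_hz / chirp_period_s
    return (C_LIGHT / 2.0) * (f_beat_hz / k)

def max_unaliased_range(sample_rate_hz, chirp_bw_hz, chirp_period_s):
    # Nyquist limit: the highest processable beat frequency is fs / 2.
    return beat_to_range(sample_rate_hz / 2.0, chirp_bw_hz, chirp_period_s)

# With a 1 GHz ADC and an assumed 1 GHz chirp over 10 us (k = 1e14 Hz/s), the
# maximum unaliased beat frequency is 500 MHz, giving R_max of about 750 m.
print(max_unaliased_range(1e9, 1e9, 10e-6))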
FIG. 3 is a block diagram illustrating an example optical system 300 according to some embodiments. Optical system 300 may include an optical scanner 301, similar to the optical scanner 102 illustrated and described in relation to FIG. 1. Optical system 300 may also include an optical processing system 302, which may include elements of free space optics 115, optical circuits 101, optical drivers 103, optical receivers 104, and signal conversion unit 106, for example. Optical processing system 302 may include an optical source 303 to generate a frequency-modulated continuous-wave (FMCW) optical beam 304. The optical beam 304 may be directed to an optical coupler 305 that is configured to couple the optical beam 304 to a polarization beam splitter (PBS) 306, and a sample 307 of the optical beam 304 to a photodetector (PD) 308. The PBS 306 is configured to direct the optical beam 304, because of its polarization, toward the optical scanner 301. Optical scanner 301 is configured to scan a target environment with the optical beam 304, through a range of azimuth and elevation angles covering the field of view (FOV) of a LIDAR window 309. In FIG. 3, for ease of illustration, only the azimuth scan is illustrated. As shown in FIG. 3, at one azimuth angle (or range of angles), the optical beam 304 may pass through the LIDAR window 309 unobstructed and illuminate a target 310. A return signal 311-1 from the target 310 will pass unobstructed through LIDAR window 309 and be directed by optical scanner 301 back to the PBS 306. At a later time in the scan (i.e., at an increased azimuth angle), the optical beam 304 may be directed by optical scanner 301 to a location on the LIDAR window 309 that is obstructed or partially obstructed by an obstruction 312. As a result, the optical beam 304 will pass through the LIDAR window 309 and be reflected or partially reflected by the obstruction 312. A return signal 311-2 from the obstruction 312 will pass back through the LIDAR window 309 and be directed by optical scanner 301 back to the PBS 306. Also illustrated in FIG. 3 is an attenuated optical beam 304A that represents an attenuated portion of optical beam 304 that is not reflected or absorbed by obstruction 312. For simplicity, the attenuated optical beam 304A is not shown to be illuminating another target. The combined return signal 311 (a time domain signal that includes the return signal 311-1 from target 310 and return 311-2 from obstruction 312), which will have a different polarization than the optical beam 304 due to reflection from the target 310 or the obstruction 312, is directed by the PBS 306 to the photodetector (PD) 308. In PD 308, the combined return signal 311 is spatially mixed with the local sample 307 of the optical beam 304 to generate a range-dependent baseband signal 313 in the time domain. The range-dependent baseband signal 313 is the frequency difference between the local sample 307 and the combined return signal 311 versus time (i.e., ΔfR(t)).

FIG. 4 is a time-magnitude plot of an example of the time domain range-dependent baseband signal 313 produced by embodiments of the present disclosure. In FIG. 4, the horizontal axis can represent time or the azimuth scan angle, which is a function of time. Given the scanning direction illustrated and described with respect to FIG. 3, the optical beam 304 is first reflected by the target 310, as illustrated in FIG. 4, and then reflected by the obstruction 312. It should be noted that, while the magnitude of the signal due to the obstruction 312 in FIG. 4 is shown as larger than the signal due to target 310, that may not always be the case, depending on the range of the target and the coefficient of reflection of the obstruction 312.
As described below with respect to FIG. 5, for example, signal processing systems described by embodiments of the present disclosure process the information depicted in FIG. 4 to produce discrete time domain sequences that can be used for further processing.

FIG. 5 is a block diagram illustrating an example signal processing system 500 according to embodiments of the present disclosure. Signal processing system 500 may include all or part of one or more components described above with respect to FIG. 1, including without limitation, signal conversion unit 106, signal conditioning unit 107, LIDAR control systems 110, signal processing unit 112 and motion control system 105. Each of the functional blocks in signal processing system 500 may be realized in hardware, firmware, software, or some combination of hardware, firmware and software.

In FIG. 5, the range-dependent baseband signal 313 generated from optical processing system 300 is provided to a time domain sampler 501 that converts the continuous range-dependent baseband signal 313 into a discrete time domain sequence 502. The discrete time sequence 502 is provided to a discrete Fourier transform (DFT) processor 503 that transforms the discrete time domain sequence 502 into a discrete frequency domain sequence 504. It will be appreciated that in the discrete frequency domain sequence 504, the frequencies associated with obstruction 312 are lower than those frequencies associated with the target 310, because the round trip time to the obstruction 312 and back is less than the round trip time to the target 310 and back, and the beat frequency in the range-dependent baseband signal is proportional to range. That is, ΔfR(t) = kΔt, where ΔfR is the beat frequency, Δt is the round trip travel time, and k is the slope of the chirp waveform as described above with respect to FIG. 2. Since the geometry of the LIDAR system is known, including the distance of the LIDAR window from the optical source, it can be determined that any beat frequencies below a given threshold frequency, such as threshold frequency 505 in FIG. 5, are associated with a LIDAR window obstruction.

The discrete frequency domain sequence 504 is provided to a peak search processor 506 that searches for and identifies energy peaks in the frequency domain to identify both target returns (e.g., target return 311-1) and returns from LIDAR window obstructions (e.g., return 311-2 from obstruction 312). In one example, signal processing system 500 also includes a frequency compensation processor 507 to correct and/or remove frequency artifacts introduced by the system. For example, in some scenarios, the scanning process itself can introduce Doppler frequency shifts due to the high-speed rotation of mirrors in the optical scanner 301. After frequency compensation by the frequency compensation processor 507, the information provided by the peak search processor 506, including an energy-frequency profile of the return signal as reflected in the range-dependent baseband signal 313, is provided to a post-processor 508. Post-processor 508, using processing instructions stored in memory 509, combines the energy-frequency profile of the return signal with azimuth and elevation data from, for example, motion control system 105 of LIDAR system 100, to generate a reflectivity map of the field of view (FOV) of the LIDAR window and an overall window health report.
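As a rough illustration of the sample-transform-threshold flow just described (a sketch only: the function and variable names are assumptions, and the simple median-based peak criterion merely stands in for the peak search processor 506):

```python
# Illustrative sketch (not the patent's implementation): separating window-
# obstruction returns from target returns by beat frequency, per FIG. 5.
# 'f_threshold' plays the role of threshold frequency 505, derived from the
# known distance between the optical source and the LIDAR window.
import numpy as np

def classify_returns(baseband: np.ndarray, f_sample: float, f_threshold: float):
    """FFT the sampled baseband signal and split spectral peaks into
    obstruction returns (below threshold) and target returns (above)."""
    spectrum = np.abs(np.fft.rfft(baseband))
    freqs = np.fft.rfftfreq(len(baseband), d=1.0 / f_sample)
    # Simple peak criterion: bins exceeding 10x the median magnitude.
    peaks = freqs[spectrum > 10 * np.median(spectrum)]
    obstruction = peaks[peaks < f_threshold]   # short round trip -> window
    targets = peaks[peaks >= f_threshold]      # longer round trip -> scene
    return obstruction, targets
```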
FIG. 6 is an example of a reflectivity map 600. Example reflectivity map 600 includes a plot of a contiguous obstruction 601. Each point in the obstruction 601 represents a pixel in the FOV of a LIDAR window, such as LIDAR window 309 in the optical processing system 300 in FIG. 3, where each pixel is associated with an azimuth angle and an elevation angle. In the example of FIG. 6, the azimuth angle spans 100 degrees between −50 degrees and +50 degrees, and the elevation angle spans 60 degrees between −30 degrees and +30 degrees. The reflectivity of the LIDAR window can be plotted as a reflectivity contour at each elevation angle and each azimuth angle. For example, in FIG. 6, the reflectivity versus azimuth angle at an elevation angle of 20 degrees is illustrated by reflectivity contour 602. Similarly, the reflectivity versus elevation angle at an azimuth angle of −30 degrees is illustrated by reflectivity contour 603.
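A minimal sketch of how such a per-pixel reflectivity map could be accumulated, assuming one pixel per degree over the FOV of FIG. 6 (the array layout and helper names are illustrative assumptions, not the patent's data structures):

```python
# Illustrative sketch: accumulating a reflectivity map over the FOV of FIG. 6
# (azimuth -50..+50 degrees, elevation -30..+30 degrees, one pixel per degree).
import numpy as np

az_deg = np.arange(-50, 51)   # 101 azimuth pixels
el_deg = np.arange(-30, 31)   # 61 elevation pixels
reflectivity = np.zeros((el_deg.size, az_deg.size))

def record_pixel(az: int, el: int, obstruction_energy: float) -> None:
    """Store the obstruction return energy for one (azimuth, elevation) pixel."""
    reflectivity[el + 30, az + 50] = obstruction_energy

# A reflectivity contour at a fixed elevation (cf. contour 602 at +20 degrees):
contour_at_20deg = reflectivity[20 + 30, :]
```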
The reflectivity map and the window health report may be used by the signal processing unit 112 and LIDAR control systems 110 to determine any operational effects of the obstruction or obstructions, and to trigger any actions needed to mitigate those operational effects. According to some embodiments, a window health report includes details concerning which portions of the FOV of window 309 are blocked and the degree to which they are blocked. For example, the window health report may include information related to whether the LIDAR system 100 has enough visibility to operate safely. According to some embodiments, if the window health report indicates an unsafe operating state due to the level of window obstruction, the LIDAR control system 110 may direct the vehicle or other platform carrying the LIDAR system 100 to park in a safe location where corrective action may be taken.

The reflectivity map and window health report can be used to map obstructed or impaired fields of view, and to calculate maximum detection ranges in the impaired FOVs based on the amount of reflected energy from the obstruction(s) and the signal-to-noise ratios in the return signal. The signal processing unit 112 can use this information to determine whether an obstructed FOV is a safety-critical FOV, such as the forward-looking field of view of a moving vehicle, for example. The signal processing system 500 can also determine whether a reduction in maximum detection range is a safety-critical impairment that would prevent the detection and avoidance of road obstacles or other moving vehicles beyond a safety-critical minimum detection range. In one example involving a vehicle-mounted LIDAR system, the signal processing unit 112 and the LIDAR control systems 110 can take corrective action to mitigate the operational effects. Such corrective action could include sending control signals that cause the vehicle to slow in order to increase reaction times, to park in a safe location, and/or to implement a window cleaning procedure, such as activating a window washing or window wiper system, for example.

According to some embodiments, the LIDAR systems described herein can be configured with a sampling rate for use by the time domain sampler 501 to sample the range-dependent baseband signal 313, and with a number of samples per pixel in the LIDAR window field of view. The timelines 701 and 702 of FIG. 7 illustrate different numbers of samples per pixel at an assumed fixed sampling rate: timeline 701 illustrates N samples per pixel, while timeline 702 illustrates 2N samples per pixel. The choice of the number of samples per pixel poses a tradeoff between angular resolution (i.e., precision of location of obstructions) and frequency resolution (which translates to range resolution). If the number of samples per pixel is lowered, then angular resolution can be increased and range resolution can be decreased. Conversely, if the number of samples per pixel is increased, then angular resolution can be decreased and range resolution can be increased. According to some embodiments, the time domain sampler 501 can increase the sampling rate when window blockage has been detected in order to better resolve the portion of the FOV that is obstructed. An increase in the sampling rate can be used to increase angular resolution without sacrificing range resolution.

FIG. 8 is a flowchart illustrating an example method 800 for detecting and mitigating the operational effects of a LIDAR window blockage according to the present disclosure. Method 800 begins at operation 802: generating a range-dependent baseband signal (e.g., baseband signal 313) from an FMCW LIDAR return signal (e.g., return signal 311). Method 800 continues at operation 804: sampling the range-dependent baseband signal in the time domain (e.g., with time domain sampler 501). Method 800 continues at operation 806: transforming the time domain samples into the frequency domain (e.g., with DFT processor 503). Method 800 continues at operation 808: searching for frequency domain energy peaks at frequencies that are less than a threshold frequency (e.g., with peak search processor 506). Method 800 continues at operation 810: determining an obstructed FOV of the FMCW LIDAR system (e.g., in post-processor 508). Method 800 continues at operation 812: determining the reflected energy in the obstructed FOV (e.g., from the reflectivity map generated by post-processor 508). Method 800 continues at operation 814: determining if the obstructed FOV is a safety-critical FOV (e.g., by signal processing unit 112). Method 800 continues at operation 816: determining whether a maximum detection range is less than a minimum safety-critical detection range (e.g., by signal processing unit 112). Method 800 concludes at operation 818: mitigating the obstruction (e.g., by automatically washing/wiping the window or by automatically directing the host vehicle of the LIDAR system to a safe location for subsequent cleaning).
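The decision logic of operations 814 through 818 can be illustrated with a short sketch (the threshold values, the interval representation of an FOV, and the function name are all assumptions made for illustration, not values from this disclosure):

```python
# Illustrative sketch of the safety checks in operations 814-818 of method 800.
def needs_mitigation(obstructed_fov_deg: tuple, max_range_m: float,
                     critical_fov_deg: tuple = (-30.0, 30.0),
                     r_min_safe_m: float = 150.0) -> bool:
    """Return True when the obstruction affects a safety-critical FOV or
    reduces the maximum detection range below the safety-critical minimum."""
    lo, hi = obstructed_fov_deg
    c_lo, c_hi = critical_fov_deg
    overlaps_critical = lo < c_hi and hi > c_lo   # azimuth interval overlap
    return overlaps_critical or max_range_m < r_min_safe_m

# Example: an obstruction spanning -10..+5 degrees azimuth in the forward FOV
# triggers mitigation even though the detection range is still adequate.
assert needs_mitigation((-10.0, 5.0), max_range_m=200.0)
```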
FIG. 9 is a block diagram of a system 900 for detecting and mitigating the operational effects of a LIDAR window blockage on an FMCW LIDAR system according to the present disclosure. System 900 includes a processor 901, which may be a part of signal processing unit 112 and/or signal processing system 500. System 900 also includes a memory 902 (e.g., a non-transitory computer-readable medium, such as ROM, RAM, flash memory, etc.) containing instructions that, when executed by processing device 901, cause the LIDAR system 100 to perform operations comprising the method for detecting and evaluating the operational effects of LIDAR window obstructions on the LIDAR system 100 as described with respect to FIG. 8.

In particular, the non-transitory computer-readable memory 902 includes: instructions 904 for generating a range-dependent baseband signal from an FMCW LIDAR return signal (e.g., return signal 311); instructions 906 for sampling the range-dependent baseband signal in the time domain (e.g., with time domain sampler 501); instructions 908 for transforming the time-domain samples into the frequency domain (e.g., with DFT processor 503); instructions 910 for searching for frequency domain energy peaks at frequencies that are less than a threshold frequency (e.g., with peak search processor 506); instructions 912 for determining an obstructed field of view (FOV) of the LIDAR system (e.g., in post-processor 508); instructions 914 for determining a reflected energy in the obstructed FOV (e.g., from the reflectivity map generated by post-processor 508); instructions 916 for determining whether the obstructed FOV is a safety-critical FOV (e.g., by signal processing unit 112); instructions 918 for determining whether a maximum detection range is less than a minimum safety-critical detection range (e.g., by signal processing unit 112); and instructions 920 for mitigating the obstruction (e.g., by automatically cleaning/wiping the window or by automatically directing the host vehicle of the LIDAR system to a safe location for subsequent cleaning).

The preceding description sets forth numerous specific details such as examples of specific systems, components, methods, and so forth, in order to provide a thorough understanding of several examples in the present disclosure. It will be apparent to one skilled in the art, however, that at least some examples of the present disclosure may be practiced without these specific details. In other instances, well-known components or methods are not described in detail or are presented in simple block diagram form in order to avoid unnecessarily obscuring the present disclosure. Thus, the specific details set forth are merely exemplary. Particular examples may vary from these exemplary details and still be contemplated to be within the scope of the present disclosure.

Any reference throughout this specification to “one example” or “an example” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. Therefore, the appearances of the phrase “in one example” or “in an example” in various places throughout this specification are not necessarily all referring to the same example. Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order, or so that certain operations may be performed, at least in part, concurrently with other operations. Instructions or sub-operations of distinct operations may be performed in an intermittent or alternating manner.

The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration.
Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
11860283
DETAILED DESCRIPTION

The present disclosure is related to a method for detecting the spoofing of a signal from a satellite in orbit. The method as described herein utilizes existing hardware of the satellite in orbit to make a determination, based on information gleaned from the existing hardware, as to whether a signal received at a receiver is a true satellite signal or a spoofing signal. While this description is primarily directed towards use on an aircraft, it is also applicable to any vehicle or environment which would utilize a spoofing detecting method as a satellite navigational aid.

FIG. 1 illustrates a satellite 10 in orbit around the earth 12. The satellite 10 emits a satellite signal 14. The satellite 10 can be any type of satellite, including but not limited to Geostationary satellites, Galileo satellites, COMPASS MEO satellites, GPS satellites, GLONASS satellites, NAVIC satellites, QZSS satellites or BeiDou-2 satellites. An aircraft 16 is illustrated in flight. The aircraft 16 can include a receiver, by way of non-limiting example a radio antenna 18, for receiving the satellite signal 14 from the satellite 10. A spoofing signal source 20 located on earth 12 can emit a spoofed satellite signal 22. While illustrated as located on earth 12, it is contemplated that a spoofing signal source 20 can be located elsewhere, including but not limited to another satellite in orbit.

At least two characteristic signatures 24, including a power level 26 and a secondary characteristic 28, 30, are associated with the satellite signal 14. As used herein, the term “characteristic signature” is simply a term used to cover any characteristic associated with the signal received. It is not to be confused with the known term “digital signature,” which can reference a cryptographic or mathematical way to verify that a document has not been tampered with during transit between sender and signer. It is contemplated, however, that a digital signature can be the secondary characteristic 28, 30 of the satellite signal 14 in addition to the power level 26.

A database 32 can be utilized for storing current transmission data values 24a associated with the satellite signal 14, which can include, but are not limited to, a current transmission power level 26a. The database 32 can be stored on a server as part of a network connected to the antenna 18. The database 32 can be updated continuously, depending on the specific implementation or a bandwidth constraint. Current transmission power levels 26a can be captured in real time, which requires a constant data stream. In some implementations, by way of non-limiting example on an aircraft, the antenna 18 may not be capable of receiving a constant data transmission due to location or lack of equipment. If the current transmission power levels 26a do not fluctuate much over time, it is contemplated that the database 32 is updated hourly, daily, weekly, or even monthly.

It should be understood that the characteristic signatures 24 received at the antenna 18 and the current transmission data values 24a stored in the database 32 should be approximately equal, accounting for any tolerances. The current transmission data values 24a can therefore be compensated with respect to atmospheric attenuation during transmission. The compensation can be a function of the secondary characteristic, by way of non-limiting example a corresponding satellite location 28, and further a function of a current distance 29 between the satellite 10 and the antenna 18.
The characteristic signatures 24 are associated with the actual satellite signal 14 received at the antenna 18, while the current transmission data values 24a can be known transmission values continuously calculated, updated, and uploaded to the database 32 based on real-time locations of the satellite 10, or other known qualities of the satellite 10. The current transmission data values 24a can be derived by having the database 32 fed with current transmission data values 24a directly from the satellite 10. In a case where the data is not available directly from the satellite 10, other locations 33, which are not mobile, can measure the current transmission power levels 26a by compensating for weather conditions 33 and distance 31. The receiver 18 on the aircraft 16 can then compensate for weather 33 and distance 31 to compare a received signal 35 to the expected characteristic signatures 24. In other words, the current transmission data values 24a can be received at other locations 33 or calculated based on known satellite data, uploaded to the database 32, and relayed to an onboard database 32a.

A number of spoofed characteristic signatures 34 can be associated with the spoofed satellite signal 22. The spoofed characteristic signatures 34 associated with the spoofed satellite signal 22 can include, but are not limited to, a spoofed power level 36, a spoofed location 38, and a spoofed time 40.

FIG. 2 illustrates a block diagram for a method 100 of detecting the spoofing of the satellite signal 14 from the satellite 10 in orbit. At 102, an apparent satellite signal 42 is received at the antenna 18. The apparent satellite signal 42 can be the satellite signal 14 or a spoofed satellite signal 22. It is further contemplated that both signals 14, 22 can be received simultaneously. The apparent satellite signal 42 can carry the at least two characteristic signatures 24, 34 having two of several characteristics, including but not limited to the power levels 26, 36: either the power level 26 associated with the satellite signal 14, or the power level 36 associated with the spoofed satellite signal 22. It is further contemplated that other data characteristics associated with the apparent satellite signal 42 can also be part of the characteristic signatures 24, 34. The secondary characteristic can include, but is not limited to, the satellite location 28 and a satellite time 30. A spoofed location 38 and a spoofed time 40 are also characteristics that can be part of the characteristic signature 34.

Upon receiving the apparent satellite signal 42, at 104, the power level 26, 36 and any additional secondary characteristic 28, 30, 38, 40 of the apparent satellite signal 42 are determined, by way of non-limiting example with a computer 45. The computer 45 can be integrated with the receiving antenna 18, or calculations could be offloaded to a separate computer in the avionics bay of the aircraft or in any other suitable location. The computer does not need a fixed location or integration with other systems.

The database 32 can be an onboard database 32a used to store the current transmission data values 24a. The current transmission data values 24a include the current transmission power level 26a. The current transmission power level 26a can be a known value downloaded prior to departure, by way of non-limiting example before an aircraft takes off while at the gate, and further calculated while in flight based on a predetermined flight path. The current transmission power level 26a can also be uploaded from another receiving source as previously described, through an encrypted safe path.
The current transmission data value 24a can therefore be a function of the current transmission power level 26a. The current transmission data values 24a can further include a real-time satellite location 28a. Utilizing small perturbation theory, updated satellite locations can be calculated to estimate predicted real-time locations of the satellite 10 in orbit and stored as the real-time satellite location 28a. It is contemplated that the computer 45 can be utilized to execute the small perturbation theory. The method can further include calculating a second difference value 48 when the current transmission data value 24a is based on the predicted or actual real-time satellite locations 28a. Another mathematical calculation contemplated includes dead reckoning, where a prediction is calculated under the assumption that the satellite moves at constant speed in a constant orbit. This would then require frequent updates of actual positions. It should be understood that the real-time satellite locations 28a can be determined in a number of ways and are not limited to those described herein.

The current transmission data values 24a can further include a Global Navigation Satellite System (GNSS) time signal 30a. It is further contemplated that determining the at least two characteristic signatures 24, 34 includes receiving a GNSS time signal as the GNSS time signal 30a from the database 32, directly or indirectly via the onboard database 32a. An onboard clock 50 can be set prior to departure, again by way of non-limiting example before an aircraft takes off while at the gate. The onboard clock 50 can be set to a known real-time. The onboard clock 50 can be, by way of non-limiting example, an atomic clock, and can be utilized to determine the GNSS time signal 30a. By way of non-limiting example, the method can further include calculating a third difference value 52 when the current transmission data value 24a is based on time from the onboard clock 50.

The computer 45 can be utilized to compare, at 106, at least one of the at least two characteristic signatures 24, 34 to the current transmission data values 24a to define a difference value 44. The difference value 44 may be numerical, binary, or text. The difference value 44 can be compared to the predetermined tolerance value 46. The method can further include calculating the difference value 44 by retrieving current transmission data values 24a based on current transmission power levels 26a from the database 32, directly or indirectly via the onboard database 32a. In the event that the difference value 44 is within the predetermined tolerance value 46, no indication needs to be sent to a user interface 56. However, it is not outside the realm of possibilities that signals received that are within the predetermined tolerance value 46 can be labeled as safe or true signals.

Indicating, at 108, that the apparent satellite signal 42 is a spoofed satellite signal 22 occurs when the difference value 44 is outside the predetermined tolerance value 46. An indication signal 54 that the apparent satellite signal 42 is a spoofed satellite signal 22 can be generated and delivered to an appropriate user interface 56. The indication signal 54 can be generated, by way of non-limiting example, by the computer 45. By way of non-limiting example, a user reading the user interface 56 can include a pilot or co-pilot of the aircraft, an air traffic controller, or both. Any appropriate user or user interface can receive the indication signal 54.
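The comparison at 106 and the indication at 108 amount to a thresholded difference. A minimal sketch, assuming scalar values for the received signature and the stored data value (the function and parameter names are illustrative assumptions):

```python
# Sketch of the compare-and-indicate step: a received characteristic signature
# is checked against the stored current transmission data value, and a spoof
# is flagged when the difference value exceeds the predetermined tolerance.
def is_spoofed(received_value: float, expected_value: float,
               tolerance: float) -> bool:
    """Difference value (cf. 44) compared against the tolerance (cf. 46)."""
    difference = abs(received_value - expected_value)
    return difference > tolerance
```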
In an exemplary detection, the satellite signal 14 can have an output power level 23 of 100 W (20 dBW), which can translate to a received characteristic signature 24 power level 26 of 0.0001 pW (−160 dBW), based on free space path loss calculated from the orbit elevation and the aircraft elevation. In the exemplary detection, the current transmission power level 26a can also be 0.0001 pW (−160 dBW), which can result in a difference value 44 of zero. A predetermined tolerance value 46 can be +/−0.001 pW. If the apparent satellite signal 42 received at the antenna 18 includes a spoofed characteristic signature 34 with a spoofed power level 36 of 0.1 pW (−130 dBW), the difference value 44 would be approximately 0.1 pW, which is outside the predetermined tolerance value 46 of +/−0.001 pW.
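The numbers in this exemplary detection can be checked with a few lines of Python (a sketch; the dBW conversion is the standard definition, and the variable names are illustrative):

```python
# Worked check of the exemplary detection above.
import math

def dbw(power_w: float) -> float:
    """Convert power in watts to dBW: 10 * log10(P / 1 W)."""
    return 10.0 * math.log10(power_w)

tx_power_w = 100.0        # output power level 23: 20 dBW
rx_power_w = 1e-16        # expected received level 26: 0.0001 pW = -160 dBW
spoof_rx_w = 1e-13        # spoofed received level 36: 0.1 pW = -130 dBW
tolerance_w = 1e-15       # predetermined tolerance 46: +/- 0.001 pW

difference_w = abs(spoof_rx_w - rx_power_w)   # ~0.0999 pW
print(f"tx = {dbw(tx_power_w):.0f} dBW, expected rx = {dbw(rx_power_w):.0f} dBW")
print(f"difference = {difference_w / 1e-12:.4f} pW, "
      f"spoofed: {difference_w > tolerance_w}")
```

Running this confirms the difference of roughly 0.1 pW, two orders of magnitude outside the ±0.001 pW tolerance, so the signal is flagged as spoofed.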
A method 100 of detecting the spoofing of the satellite signal 14 from the satellite 10 in orbit is illustrated in the flow chart of FIG. 2. The method includes, at 102, receiving by the antenna 18 the apparent satellite signal 42; at 104, determining at least two characteristic signatures 24, 34 of the apparent satellite signal 42 including a power level 26, 36; at 106, comparing the at least two characteristic signatures 24, 34 to the current transmission data values 24a to define a difference value 44; and, at 108, indicating the apparent satellite signal 42 is a spoofed satellite signal 22 when the difference value 44 is outside a predetermined tolerance value 46.

It is further contemplated that the secondary characteristic 28, 30 as described herein can include waveform generation, which can be a function of the hardware used on the satellite. The spoofing signal source 20 would include different hardware or software than the satellite 10 to create the spoofed satellite signal 22, which would present as a difference value 44 reflecting small deviations in waveforms (e.g., a more or less perfect square wave). It is also contemplated that the secondary characteristic 28, 30 as described herein includes determining the bandwidth utilized by the satellite 10 and/or receiver 18 and how much noise is generated. The difference value 44 would take into account tolerances that include bleed-over frequencies. Utilizing noise in conjunction with power levels 26 enables long-term monitoring of the fluctuation in both the noise associated with differing bandwidths and the power level 26 emitted over time. It should be understood that the at least two characteristics as described herein include a power level and any one of the secondary characteristics as described herein. It is also contemplated that the at least two characteristics as described herein can include three or more characteristics.

Benefits associated with the method of detecting the spoofing of the satellite signal described herein enable a pilot to be alerted of possible spoofing. Allowing pilots access to information regarding possible spoofing increases safety and security for the aircraft along with passengers on board. Furthermore, a reduction of missed approaches during landing procedures can be a result of an informed pilot. Informed communication with air traffic control in a case of a spoofing attack more quickly enables both pilots and air traffic control workers to communicate with each other and to identify and fix any errors in navigation that may occur due to an attempted spoofing attack. Furthermore, the method as disclosed herein can be implemented and carried out with existing parts on any aircraft, satellite, or structures provided on earth.

The cost of implementing the method is therefore less than replacing the existing GNSS infrastructure with cryptographically signed transmissions, where each signal is implanted with a digital signature. Proper cryptographic authentication of signals requires hardware and software changes globally, and modifying existing satellites in orbit is difficult. The disclosure herein enables an update for receivers that is “backwards compatible” when improvement in spoofing detection is necessary.

To the extent not already described, the different features and structures of the various embodiments can be used in combination with each other as desired. That one feature is not illustrated in all of the embodiments is not meant to be construed that it cannot be, but is done for brevity of description. Thus, the various features of the different embodiments can be mixed and matched as desired to form new embodiments, whether or not the new embodiments are expressly described. All combinations or permutations of features described herein are covered by this disclosure.

This written description uses examples to describe aspects of the disclosure described herein, including the best mode, and also to enable any person skilled in the art to practice aspects of the disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of aspects of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

Further aspects of the invention are provided by the subject matter of the following clauses: 1. A method for detecting the spoofing of a signal from a satellite in orbit, the method comprising receiving by a receiver an apparent satellite signal; determining at least two characteristic signatures of the signal including a power level and a secondary characteristic; comparing at least one of the at least two characteristic signatures to at least one current transmission data value to define a difference value; and indicating the apparent satellite signal is a spoofed satellite signal when the difference value is outside a predetermined tolerance value. 2. The method of any preceding clause further comprising calculating the difference value by retrieving the at least one current transmission data value based on current transmission power levels of satellites from a database. 3. The method of any preceding clause wherein the determining a secondary characteristic further comprises receiving a real-time location of a satellite. 4. The method of any preceding clause further comprising calculating a second difference value when the at least one current transmission data value is based on a real-time satellite location. 5. The method of any preceding clause wherein the real-time location is one of an actual real-time location or a predicted real-time location. 6. The method of any preceding clause wherein the determining a secondary characteristic further comprises receiving a GNSS time signal. 7. The method of any preceding clause further comprising calculating a third difference value when the at least one current transmission data value is based on a time from an onboard clock. 8.
The method of any preceding clause further comprising a database comprising a table of current transmission power and a corresponding current real-time location for the satellite. 9. The method of any preceding clause wherein the at least one current transmission data value is a function of the current transmission power level. 10. The method of any preceding clause wherein the at least one current transmission data value is compensated for atmospheric attenuation. 11. The method of any preceding clause wherein the compensation is a function of a corresponding current satellite location. 12. The method of any preceding clause wherein the compensation is a function of a current distance between the satellite and the receiver. 13. The method of any preceding clause further comprising generating an indication signal and delivering the indication signal to a user interface. 14. The method of any preceding clause wherein the determining a secondary characteristic can include determining a waveform or bandwidth. 15. A method for detecting the spoofing of a signal from a satellite in orbit to a receiver on an aircraft, the method comprising receiving by the receiver an apparent satellite signal; determining at least two characteristic signatures of the signal including a power level; comparing at least one of the at least two characteristic signatures to a current transmission data value to define a difference value; and indicating the apparent satellite signal is a spoofed satellite signal when the difference value is outside a predetermined tolerance value. 16. The method of any preceding clause wherein the determining at least one reference value further comprises receiving a real-time location of a satellite. 17. The method of any preceding clause further comprising calculating a second difference value when the at least one current transmission data value is based on a real-time satellite location. 18. The method of any preceding clause wherein the determining at least two characteristic signatures further comprises receiving a GNSS time signal. 19. The method of any preceding clause further comprising calculating a third difference value when the at least one current transmission data value is based on time from an onboard clock. 20. The method of any preceding clause further comprising generating and delivering an indication signal to a user interface.
11860284
It will be noted that throughout the appended drawings, like features are identified by like reference numerals.

DETAILED DESCRIPTION

Various aspects of the present disclosure generally address one or more of the problems of using a dedicated GNSS receiver in a user-friendly manner. The present description provides a handle, or handheld receptacle, to be used in combination with a GNSS receiver, wherein the GNSS receiver requires a battery to be operable. Typically, the battery is housed in the GNSS receiver; however, according to the present disclosure, the handheld receptacle can advantageously be inserted in a battery housing of the GNSS receiver to secure them together, and the battery is instead placed in the handheld receptacle. Indeed, the battery is mechanically swappable between the inside of the handheld receptacle and the inside of the GNSS receiver, depending on whether or not the handheld receptacle is combined with the GNSS receiver.

The handheld receptacle is insertable into the battery housing of the GNSS receiver. This insertability is made possible by shaping a portion of the handheld receptacle to be complementary with the inside of the battery housing of the GNSS receiver; in other words, said portion of the handheld receptacle is shaped in part like the battery which is otherwise housed in the GNSS receiver. This permits the handle to be secured to the GNSS receiver in a convenient and solid manner when they are used in combination. An electric connection can also be provided such that the battery, originally housed in the GNSS receiver and displaced into a battery housing of the handheld receptacle instead, can provide electrical power to the GNSS receiver through the handheld receptacle, via an electrical connection of the handheld receptacle in the battery housing of the GNSS receiver, to keep powering the GNSS receiver.

A global navigation satellite system (GNSS) receiver according to the present disclosure encompasses, without limitation, any one of or more than one of: Global Positioning System (GPS), Galileo, GLONASS, China's BeiDou Navigation Satellite System, QZSS, IRNSS, or any other system which is considered as a fully operational global navigation satellite system.

Referring now to the drawings, FIGS. 1A-1G illustrate a GNSS receiver 100 in accordance with the embodiments of the present disclosure. According to an embodiment, the GNSS receiver 100 comprises a housing 101 (or body) and an antenna 102. The GNSS receiver 100 also comprises electronic components, such as a micro-controller 105 and suitable transceivers, for example and without limitation, a Bluetooth™ transceiver 108 for sending and receiving data from nearby devices, including electronic devices 110 such as a smartphone, a tablet, a laptop computer, and the like. According to an embodiment, and as shown in FIGS. 1A-1G and FIGS. 2A-2G, the GNSS receiver 100 comprises all components in the housing 101, including the antenna 102 and the electronic components 104 (such as the micro-controller 105, Bluetooth™ transceiver 108, etc.). According to the embodiment as shown in FIGS. 1A-1G, the antenna is a single-band antenna. According to the embodiment as shown in FIGS. 2A-2G, the antenna is a multi-band antenna, which is thicker than the single-band antenna and therefore requires a taller housing 101 to accommodate this thicker antenna. According to another embodiment, and as shown in FIGS. 3A-3C, the antenna 102 can be separate and distinct from the remaining parts of the GNSS receiver 100 instead of being housed therein.
According to this embodiment, the antenna 102 is therefore releasably securable onto the housing 101. Accordingly, in this embodiment, the housing 101 and/or the antenna 102 comprise an attachment 109 for mechanically coupling them together, such as a screw on one element and a corresponding thread on the other element for screwing them together. Other attachments are possible, such as, without limitation, a snap connector, a pin and a bore, a clip, holding arms, corresponding protrusion and recess which can lock together, a pair of magnets or electromagnets, an adhesive or sticker, a zipper, buttons, etc. This embodiment is useful for cases in which the antenna 102 needs to be interchanged or needs to be attached to something else, such as a backpack or other pieces of equipment. For example, the attachment 109 may be a screw-type protrusion which is provided in the housing 101, and the antenna 102 has a bottom inner threaded bore with corresponding diameter and pitch to match with the screw-type protrusion and be screwed onto the housing 101, as shown in FIGS. 3A-3C. Moreover, the housing 101 may comprise all ports and other types of connectors which are necessary or advantageous to have on the external surface thereof to be able to input and output data therethrough, and they should be in connection with the appropriate electronic components 104 therein. According to an embodiment, padding, such as padding made of a resilient material (silicone and the like), can be used to mechanically protect the external surface of the GNSS receiver 100.

According to an embodiment, and as shown in FIGS. 4A-4B, the GNSS receiver 100 can be installed on a pole-mount adapter 140 (or bracket), which cooperates and mechanically couples with an external portion of the housing 101 of the GNSS receiver 100. For example, and without limitation, a rail 141 of the pole-mount adapter 140 can engage with a corresponding pair of linear recesses 144 on the sides of the housing 101 of the GNSS receiver 100, and the coupling can include additional features such as a spring-biased pin or a similar element which engages with a corresponding bore on the opposite part to lock the housing 101 and the pole-mount adapter 140 together. According to an embodiment, the pole-mount adapter 140 can be used in combination with other elements for the purpose of mounting the GNSS receiver 100 on such an element, for example a pole 150, using a pin or screw of the pole 150 inserted into a bore of the pole-mount adapter 140. As shown in FIGS. 5A-5B, the pole 150 can be adapted to cooperate and lock with the pole-mount adapter 140, thereby mounting and securing the GNSS receiver 100 onto the pole 150, using the pole-mount adapter 140 in-between to secure them together in a pole-mounted configuration. Also, the upper portion of the pole in FIGS. 5A-5B may in fact belong to the pole-mount adapter 140 shown in FIGS. 4A-4B.

According to an embodiment of the disclosure, and referring to FIGS. 6A-6B, there is provided, in the housing 101, a battery receptacle 160 for receiving a battery 165 (namely the GNSS receiver battery receptacle 160). For example, the battery receptacle 160 and the battery 165 may comprise corresponding elements which cooperate or interlock together such that the battery 165 may fit (e.g., slide into) and be retained in the battery receptacle 160.
For example, and without limitation, a rail 167 on the sides of the housing 101 of the GNSS receiver 100 can engage with a corresponding pair of linear recesses 168 on the battery 165, or vice versa (the rail and linear recesses can be provided on opposite elements, as long as all is consistent in the complete system). The coupling can include additional features such as a spring-biased pin or retaining pin 111 or a similar element which engages with a corresponding bore on the opposite part to lock the battery receptacle 160 of the housing 101 and the battery 165 together, or which may simply align a corresponding part 112 of said retaining pin 111 with a corresponding hooking portion 161 of the battery 165, as shown in FIGS. 6A-6B.

Now referring to FIGS. 7A-7D, there is shown a handle, or handheld receptacle 700, adapted for use with a GNSS receiver 100, in accordance with an embodiment of the disclosure. The handheld receptacle 700 should comprise a handling portion 701 which is shaped to be grabbed and manipulated by a hand, for example. According to an embodiment, and as shown in FIGS. 8A-8B, the handheld receptacle comprises an insertion portion 710 which is adapted to be inserted into the battery receptacle 160 of the housing 101 when no battery 165 is housed in the battery receptacle 160. For example, and without limitation, the distal portion of the handheld receptacle 700 forms the insertion portion 710, having an external shape similar to the external shape of the battery 165 (i.e., it is shaped as an outer portion thereof) which is normally inserted into the battery receptacle 160 of the housing 101. This ensures that the battery receptacle 160 of the housing 101 can receive therein either the battery 165 or the insertion portion 710 of the handheld receptacle 700, interchangeably. A hooking portion 711 can be provided to be hooked by the retaining pin 111 of the GNSS receiver when the insertion portion 710 of the handheld receptacle 700 is inserted and housed therein.

Accordingly, just like with the battery 165, the insertion portion 710 of the handheld receptacle 700 may comprise corresponding elements which cooperate or interlock with the battery receptacle 160 such that the insertion portion 710 may fit (e.g., slide into) and be retained in the battery receptacle 160. For example, and without limitation, a rail 717 of the insertion portion 710 can engage with a corresponding pair of linear recesses 168 on the sides of the housing 101 of the GNSS receiver 100, or vice versa, and the coupling can include additional features such as a spring-biased pin or a similar element which engages with a corresponding bore on the opposite part to lock them together.

In this case, since the battery 165 is removed from the GNSS receiver 100, the handheld receptacle 700 should also comprise a battery, as well as the electric circuit which feeds the electrical power from said battery, from within the handheld receptacle 700, to the appropriate connectors in the battery receptacle 160 of the GNSS receiver 100. Advantageously, and as shown in FIG. 8B, it would be the same battery 165, removed from the GNSS receiver 100 and then reinserted into the other distal end, e.g., the bottom, of the handheld receptacle 700, in a battery receptacle 760 of the handheld receptacle 700 (namely the handle battery receptacle), similar to the battery receptacle 160 of the GNSS receiver 100.
Accordingly, just like with the battery receptacle 160, the battery receptacle 760 (or battery housing) of the handheld receptacle 700 and the battery 165 may comprise corresponding elements which cooperate or interlock together such that the battery 165 may fit (e.g., slide into) and be retained in the battery receptacle 760 of the handheld receptacle 700. For example, and without limitation, a pair of linear recesses 168 of the battery 165 can engage with a corresponding rail 767 on the sides of the battery receptacle 760, or vice versa, and the coupling can include additional features such as a retaining pin 761 or a similar element which engages with a corresponding bore on the opposite part to lock the battery receptacle 760 of the handheld receptacle 700 and the battery 165 (with its hooking portion 162) together.

Formally, this can be described as follows. According to an embodiment of the disclosure, the GNSS receiver battery receptacle 160 comprises a first rail, and the insertion portion 710 of the handheld receptacle 700 comprises a first linear recess corresponding to the first rail for slidably receiving the insertion portion 710 of the handheld receptacle 700 in the GNSS receiver battery receptacle 160. The handheld receptacle battery receptacle 760 comprises a second rail, and the battery 165 comprises a second linear recess corresponding to the second rail for slidably receiving the battery 165 in the handheld receptacle battery receptacle 760. The second rail of the handheld receptacle battery receptacle is identical to the first rail of the GNSS receiver battery receptacle, and the second linear recess of the battery 165 corresponds to both the first rail and the second rail for alternately slidably receiving the battery 165 in the handheld receptacle battery receptacle 760 or in the GNSS receiver battery receptacle 160. It follows that the first and second linear recesses should also be identical.

Alternatively, that is, according to an alternative embodiment of the disclosure, the GNSS receiver battery receptacle 160 comprises a first linear recess, and the insertion portion 710 of the handheld receptacle 700 comprises a first rail corresponding to the first linear recess for slidably receiving the insertion portion 710 of the handheld receptacle 700 in the GNSS receiver battery receptacle 160. The handheld receptacle battery receptacle 760 comprises a second linear recess, and the battery 165 comprises a second rail corresponding to the second linear recess for slidably receiving the battery 165 in the handheld receptacle battery receptacle 760. The second linear recess of the handheld receptacle battery receptacle 760 is identical to the first linear recess of the GNSS receiver battery receptacle 160, and the second rail of the battery 165 corresponds to both the first linear recess and the second linear recess for alternately slidably receiving the battery 165 in the handheld receptacle battery receptacle 760 or in the GNSS receiver battery receptacle 160. It follows that the first and second rails should also be identical.
In order to be able to provide power to the GNSS receiver 100 uninterruptedly during the removal of the battery 165, the insertion of the insertion portion 710 of the handle 700 into the battery receptacle of the GNSS receiver 100, and the reinsertion of the battery 165 (or another similar battery) into the battery receptacle 760 of the handle 700, there may be provided a temporary battery, a capacitor (condenser), or another type of energy storage to provide for a hot swapping of the battery 165 during the active operation time of the GNSS receiver 100 without losing power completely, to ensure continuity of operation during the short period of time (e.g., a few seconds) needed to perform the swapping of the battery from one housing to the other (160, 760).

As shown in FIGS. 9A-9D, the combined GNSS receiver 100 and handheld receptacle 700 provide an apparatus which is easy to handle and which comprises the active GNSS receiver 100. The battery 165 is displaced into the battery receptacle 760 of the handheld receptacle 700 instead of the battery receptacle 160 of the GNSS receiver 100. According to an embodiment of the disclosure, and as shown in FIGS. 10A-10B, the assembled or combined apparatus can be installed onto the pole 150 using the pole-mount adapter 140, which is still attachable to the GNSS receiver 100 while the insertion portion 710 is inserted into the battery receptacle 160 of the housing 101 of the GNSS receiver 100.

According to an embodiment of the disclosure, and as shown in FIGS. 11A-11B, there is also provided a socket 750, which is a receptacle that receives and holds in place an external electronic device 110, such as a smartphone, a tablet computer, or another type of portable computer, etc. As shown in the exploded view of FIG. 12, the handheld receptacle 700 can be made, for example, of two cooperating portions which are assembled together, for example by screwing (assembly screws are shown). According to an embodiment, the socket 750 can be detachable and releasably secured to the surface of the handheld receptacle 700, for example by screwing or pinning it in place. The socket 750 for mounting and securing the external electronic device 110 (smartphone, tablet, etc.) is located on the handheld receptacle 700, and according to an embodiment, it is located between the insertion portion 710 and the battery receptacle 760. The retaining pin 761 can be inserted, for example from the side of the handheld receptacle 700, to retain a corresponding portion of the battery 165, such that when the battery is inserted, the retaining pin 761 can be inserted afterward to retain the battery 165 in place and clip or lock with a corresponding hooking portion 161 on the battery 165. As mentioned above, a similar mechanism could be used for the GNSS receiver 100 when the battery 165 is housed therein.

According to an embodiment, the GNSS device 100 as described herein is configured to determine a geolocation, or GNSS location, by being in communication with a GNSS satellite. The GNSS receiver 100 may record the geolocation over time along with other data (such as metadata or precision data). The GNSS receiver 100 may also communicate the data to a computer, such as the computing device 110, which, without limitation, can be installed on the socket 750. The computer device 110 may be a computer, laptop, iPad or a tablet, or any other device that has a Bluetooth transceiver, an input/output periphery, a display, a computer device processor and a memory.
The computer device 110 is configured to receive the data in Bluetooth protocol from the GNSS receiver 100, store it in the computer memory, and retrieve it for display, for transfer, or for a user query when requested.

The system described herein can be used advantageously to add a handheld receptacle 700 to a GNSS receiver 100 which is made to be used without any handle when no handle is present, or when it works as well or better without one, providing greater versatility and adaptability. Also, the ability to change the location of the battery enables a much deeper and more solid insertion of the handle inside the GNSS receiver 100, and the inclusion of the battery 165 at the other end of the handheld receptacle 700 gives a weight distribution which is more agreeable when the combined apparatus is manipulated for operation. Also, the hot-swap capability of the battery ensures convenience when switching from one configuration (GNSS receiver in standalone configuration) to the other configuration (combined configuration of the GNSS receiver 100 and handle inserted therein).

Regarding the weight distribution, including the battery 165 at the other end of the handheld receptacle 700 changes the center of mass of the whole apparatus by displacing it away from an eventual GNSS receiver 100 to be installed thereon. This implies that when the GNSS receiver 100 is installed, having the relatively heavy battery relocated at the end of the handheld receptacle 700 away from the GNSS receiver 100 (also relatively heavy) makes a more balanced apparatus; i.e., the battery's weight compensates (at least partly) for the weight of the GNSS receiver 100 on the other side of the handle, thus making the apparatus balanced on either side of the handle portion of the handheld apparatus and ensuring that the total (net) moment of force relative to a central portion of the handle portion is reduced. This puts less stress on the wrist of the user, who does not need to exert as much force to keep the apparatus balanced, thanks to the battery's weight pulling down on the other side of the handle portion. This can be compared to a prior-art setting in which the user would need to force at the wrist to compensate for the weight of the GNSS receiver 100, with the battery being on the same side as the electronic device and thereby not compensating for, but rather increasing, the moment of force exerted onto the user's wrist. The socket 750, located approximately at a center of the handheld receptacle 700 and which receives the weight of the external electronic device 110, also contributes to maintaining the overall weight relatively centered close to the hand of the user, to avoid unbalance relative to said center and thereby reduce wrist fatigue.

Finally, this configuration can also be used to switch to a pole-mounted configuration easily, and this switch can be done with either the battery or the handle being housed in the battery receptacle 160 of the GNSS receiver 100, because they have the same shape and can both accommodate the pole-mount adapter 140 for easy connection onto the pole 150. Using the handle 700 with its socket 750 is also very useful for easy installation of an associated computing device 110, which can be used for input and output of data by being in communication (wired, Bluetooth™, etc.) with the GNSS receiver 100.
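The moment-of-force argument can be illustrated numerically (a back-of-the-envelope sketch; all masses and lever arms are assumed values for illustration, not values from this disclosure):

```python
# Simple statics sketch of the balance argument around the grip of the handle.
g = 9.81                               # gravitational acceleration (m/s^2)
m_receiver, d_receiver = 0.60, 0.15    # assumed receiver mass (kg), lever arm (m)
m_battery, d_battery = 0.30, 0.12      # assumed battery mass (kg), lever arm (m)

# With the battery on the far side of the grip, its moment opposes the
# receiver's moment; the wrist only counteracts the net difference.
net_moment = g * (m_receiver * d_receiver - m_battery * d_battery)
print(f"net moment about the grip: {net_moment:.2f} N*m")

# With battery and receiver on the same side (prior-art case), the moments add:
same_side_moment = g * (m_receiver * d_receiver + m_battery * d_battery)
print(f"same-side moment: {same_side_moment:.2f} N*m")
```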
While preferred embodiments have been described above and illustrated in the accompanying drawings, it will be evident to those skilled in the art that modifications may be made without departing from this disclosure. Such modifications are considered possible variants included within the scope of the disclosure.
11860285
In all the figures, similar elements bear identical reference numerals.

DETAILED DESCRIPTION OF THE INVENTION

1/ Fleet of Vehicles

Referring to FIG. 1, a fleet of vehicles comprises a main vehicle 1 and at least one secondary vehicle 2 movable relative to the main vehicle 1. Each vehicle 1, 2 can be of any type: land vehicle, ship, aircraft, etc. Objects external to the fleet of vehicles are also represented in FIG. 1: daymarks (lampposts and the Eiffel tower), as well as the Earth itself. As will be seen below, these objects are reference points for the navigation of the fleet. In the following, different frames are considered: a main frame attached to the main vehicle 1, a secondary frame attached to the secondary vehicle 2, and a reference frame. The reference frame is for example a celestial frame attached to stars or to the Earth.

Referring to FIG. 2, the main vehicle 1 comprises a data receiving interface. The receiving interface comprises at least one first sensor 3 onboard the main vehicle 1. The at least one first sensor 3 is configured to acquire kinematic data from the secondary vehicle 2 in the main frame. The at least one first sensor 3 comprises for example a lidar, a camera or an odometer. In the present disclosure, it is considered that the expression "kinematic data" covers in particular positions, velocities or accelerations. The receiving interface also comprises a communication interface 4 adapted to communicate with the secondary vehicle 2, in particular to receive from the latter kinematic data of the main vehicle 1 in the frame attached to the secondary vehicle 2. The communication interface 4 is of the wireless radio type, and comprises for example an antenna.

The main vehicle 1 furthermore comprises at least one proprioceptive sensor 6. The proprioceptive sensor 6 comprises for example an inertial unit. The inertial unit comprises a plurality of inertial sensors such as gyrometers and accelerometers. As a variant, the proprioceptive sensor 6 comprises at least one odometer. The main vehicle 1 further comprises a receiver 8 configured to acquire relative kinematic data between the main vehicle 1 and a third object separate from the main vehicle 1 and from the secondary vehicle 2. The receiver is for example a GPS/GNSS receiver, in which case the third object is the Earth or one of the stars to which the celestial frame is attached. As a variant or in addition, this receiver comprises a lidar or a camera, in which case the third object can be a daymark located in the vicinity of the main vehicle 1. In another variant, the receiver comprises an odometer which provides a relative velocity of the carrier with respect to the Earth.

The main vehicle 1 furthermore comprises a data processing unit 10. The processing unit 10 is arranged to process data received by the receiving interface (therefore received by the first sensor 3 or by the communication interface 4), by the inertial unit 6 or by the receiver 8. The data processing unit 10 typically comprises at least one processor configured to implement a navigation assistance method, which will be described below, by means of an invariant-type Kalman filter. The invariant Kalman filter is typically in the form of a computer program executable by the processor of the data processing unit. The general operation of an invariant Kalman filter is known per se.
However, it will be seen below that the binary operation used to configure the invariant Kalman filter implemented by the processing unit 10 is chosen in a particular manner, so as to adapt to the context of the assistance in the navigation of the fleet of vehicles comprising the vehicles 1 and 2.

Preferably, the processing unit 10 comprises at least two processors, so as to implement two Kalman filters in parallel. It will be seen below that these two Kalman filters do not use exactly the same input data.

Furthermore, the secondary vehicle 2 comprises at least one second sensor 12 and a communication interface 14 for transmitting data acquired by the at least one second sensor 12 to the communication interface 4 of the main vehicle 1. The second sensor 12 is configured to acquire movement data of the main vehicle 1 in the frame attached to the secondary vehicle 2. The at least one second sensor 12 comprises for example a lidar, a camera or an odometer.

The secondary vehicle 2 comprises means for providing proprioceptive movement data of the secondary vehicle. These providing means comprise for example at least one proprioceptive sensor 16. The proprioceptive sensor is for example of one or more of the types envisaged for the proprioceptive sensor 6. As a variant, these providing means comprise a memory storing an a priori model of evolution of the secondary vehicle 2. This memory can be integrated into the secondary vehicle 2 as well as into the main vehicle 1.

2/ Configuration of the Invariant Kalman Filter

The invariant Kalman filter implemented by the processing unit 10 is configured to estimate a navigation state of the fleet comprising the main vehicle 1 and the secondary vehicle 2. The navigation state comprises first variables representative of a first rigid transformation linking the main frame (attached to the main vehicle 1) to the reference frame, and second variables representative of a second rigid transformation linking the secondary frame (attached to the secondary vehicle 2) to the main frame. The first rigid transformation allows for example switching from the frame linked to the main vehicle 1 to the reference frame, and the second one allows switching from the frame linked to the main vehicle 1 to the frame linked to the secondary vehicle 2.

In a well-known manner, a rigid transformation (also known as an affine isometry) is a transformation that preserves the distances between pairs of points of a solid. Thus, each of the first and second rigid transformations can be characterized by the composition of a rotation and a translation.

In the following, an embodiment will be detailed in which the navigation state, denoted $X$, comprises the following elements:

$X = (R_p, x_p, R_{sp}, x_{sp})$

where $R_p$, $x_p$, $R_{sp}$, $x_{sp}$ are defined as follows:
- $R_p$ and $x_p$ are respectively a rotation matrix and a vector of dimension 3, representing the attitude and the position of the main vehicle: a vector $u$ written in the frame of the main vehicle 1 becomes the vector $R_p u + x_p$ in the fixed frame.
- $R_{sp}$ and $x_{sp}$ are respectively a rotation matrix and a vector of dimension 3, representing the attitude and the relative position of the main vehicle relative to the secondary vehicle: a vector $u$ written in the frame of the main vehicle 1 becomes the vector $R_{sp} u + x_{sp}$ in the frame of the secondary vehicle.

In this particular embodiment, the first variables are $R_p$, $x_p$ and the second variables are $R_{sp}$, $x_{sp}$. The expression "the object $X'$ is of the same nature as the state vector", used below, will mean that $X'$ is a succession of matrices and vectors similar to $X$.
The number $3 \times (r + v) = 12$ will also be called the "dimension of state $X$", where $r$ is the number of rotation matrices appearing in $X$ and $v$ the number of vectors appearing in $X$. In other embodiments, this number can be different.

Furthermore, the invariant Kalman filter is further configured to use as observations relative kinematic data between the main vehicle 1 and the secondary vehicle 2 received by the receiving interface, coming from the first sensor 3 of the main vehicle 1. The observation here will be the relative position of the secondary vehicle expressed in the frame of the main vehicle:

$Y = -R_{sp}^T x_{sp}$

The invariant Kalman filter is configured to use as binary operation an operation comprising a term-by-term composition of the first rigid transformation and of the second rigid transformation. This binary operation, denoted $*$, applies the following transformation to two objects $(R_p, x_p, R_{sp}, x_{sp})$ and $(R'_p, x'_p, R'_{sp}, x'_{sp})$ in one embodiment:

$(R_p, x_p, R_{sp}, x_{sp}) * (R'_p, x'_p, R'_{sp}, x'_{sp}) = (R_p R'_p,\ x_p + R_p x'_p,\ R_{sp} R'_{sp},\ x_{sp} + R_{sp} x'_{sp})$

3/ Method for Assisting the Navigation of the Fleet

Referring to FIG. 3, a method 100 for assisting the navigation of the fleet according to a first embodiment, and implementing an invariant Kalman filter configured as indicated in section 2/, comprises the following steps. It is assumed that an estimation $\hat{X}_1$ of the navigation state of the fleet has been estimated by the invariant Kalman filter.

In an acquisition step 102, the first sensor 3 acquires a first group of movement data $Y_1$ from objects external to the main vehicle 1 in the main frame. These data can comprise:
- position data of the secondary vehicle 2 in the main frame (the corresponding external object is then the secondary vehicle 2);
- position data of at least one daymark in the main frame (the corresponding external object is then this daymark). The daymark is at a known position in the reference frame.

These data $Y_1$ are transmitted to the processing unit 10.

In a step 104, the processing unit 10 calculates the difference between the observed measurements $Y_1$ and the expected measurements (this difference, denoted $Z_1$, is called the innovation in the literature dealing with Kalman filters). The expected measurements are deduced from the state $\hat{X}_1$ previously estimated by the invariant Kalman filter.

In a correction step 106, the data processing unit 10 multiplies the innovation $Z_1$ by a matrix $K_1$ called the "gain" matrix, which turns $Z_1$ into a linear correction $dx_1 = K_1 Z_1$ to be applied to the state of the system. The choice of the gains is a classic question common to most estimation methods (see below).

In a retraction step 108, the processing unit 10 transforms the linear correction $dx_1$ into a non-linear correction $C_1$ of the same nature as $\hat{X}_1$ (the state $\hat{X}_1$ is not a vector because it contains rotations). The transformation used is any function taking as argument a vector of the dimension of the state $X$ (12 in this embodiment) and returning an object of the same nature as $X$, but a particularly efficient choice is the term-by-term exponential of the Lie group of the pairs of rigid transformations.

A non-linear update step 110 is then implemented by the processing unit 10. In this step 110, the processing unit 10 combines the estimation $\hat{X}_1$ of the state of the system with the non-linear correction $C_1$ to build a corrected estimation:

$\hat{X}_1^+ = C_1 * \hat{X}_1$

where the symbol $*$ is the binary operation defined above.
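To make the structure of this update concrete, the following is a minimal sketch, not taken from the patent, of the state $(R_p, x_p, R_{sp}, x_{sp})$, the binary operation $*$, the expected observation $Y = -R_{sp}^T x_{sp}$, and the non-linear update of step 110, assuming the term-by-term composition given above; all function names are illustrative.

```python
import numpy as np

# Minimal sketch (illustrative, not the patent's code) of the state
# X = (Rp, xp, Rsp, xsp) and of the steps 104-110 described above.

def op(X, Y):
    """Binary operation *: term-by-term composition of the two rigid
    transformations making up each state."""
    Rp, xp, Rsp, xsp = X
    Rq, xq, Rsq, xsq = Y
    return (Rp @ Rq,           # main attitude
            xp + Rp @ xq,      # main position
            Rsp @ Rsq,         # relative attitude
            xsp + Rsp @ xsq)   # relative position

def expected_observation(X):
    """Expected measurement: relative position of the secondary vehicle
    expressed in the main frame, Y = -Rsp^T xsp."""
    _, _, Rsp, xsp = X
    return -Rsp.T @ xsp

def update(X_hat, C):
    """Non-linear update of step 110: X_hat+ = C * X_hat, where C is the
    retraction of the linear correction dx = K @ Z."""
    return op(C, X_hat)
```

In this sketch the retraction of step 108 (for example the term-by-term Lie-group exponential) is assumed to have already produced the non-linear correction C before update is called.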
The gain matrix $K_1$ is chosen so as to stabilize the non-linear estimation error $e$ defined by:

$e = \hat{X}_1 * X^{-1}$

where the symbol $^{-1}$ denotes the inversion associated with the operation $*$:

$(R_p, x_p, R_{sp}, x_{sp})^{-1} = (R_p^T,\ -R_p^T x_p,\ R_{sp}^T,\ -R_{sp}^T x_{sp})$

The error can also be written explicitly in the following manner:

$e = (\hat{R}_p R_p^T,\ \hat{x}_p - \hat{R}_p R_p^T x_p,\ \hat{R}_{sp} R_{sp}^T,\ \hat{x}_{sp} - \hat{R}_{sp} R_{sp}^T x_{sp})$

This error is used to build the linearized system according to the usual procedure of invariant filtering, from which the matrix $K_1$ is deduced, for example by integrating a Riccati equation.

In a propagation step 112, known per se to those skilled in the art, the processing unit 10 generates a propagated navigation state from the state $\hat{X}_1^+$. To do so, the processing unit 10 applies the evolution model, which can be, for example, an odometry, an a priori model or a conventional integration of inertial measurements acquired by the proprioceptive sensors 6, 16 included in the vehicles 1, 2.

The steps described above form an iteration of the invariant Kalman filter. Thanks to the invariant Kalman filter, a property that would also hold in a linear case is obtained: the evolution of the estimation error is autonomous (it depends neither on $X$ nor on $\hat{X}_1$). The processing unit 10 repeats these same steps 102, 104, 106, 108, 110, 112 in new iterations of the invariant Kalman filter. The state estimated during the propagation step 112 of a given iteration is used as input data for the innovation calculation 104 and non-linear update 110 steps of a next iteration.

Ultimately, thanks to the method 100, the main vehicle 1 can obtain assistance not only on its own navigation, but also on the navigation of the secondary vehicle 2, based on the different data measured by the first sensor 3 and the proprioceptive sensors 6, 16.

A method 200 for assisting the navigation of the fleet according to a second embodiment, and implementing an invariant Kalman filter also configured as indicated above, is also shown in the right part of FIG. 3; this method 200 comprises the following steps.

In an acquisition step 202, movement data $Y_2$ of the main vehicle 1 in at least one frame attached to an object external to the main vehicle are acquired. The data $Y_2$ can comprise:
- kinematic data of the main vehicle 1 in the secondary frame acquired by the second sensor 12, for example position data of the main vehicle 1 in the secondary frame (in which case the corresponding external object is the secondary vehicle);
- data acquired by the receiver 8 (the corresponding object can then be considered to be the Earth, since these data allow geolocating the main vehicle relative to the Earth).

The data $Y_2$ are transmitted to the main vehicle 1, where appropriate via the communication interfaces 14 and 4 when they come from the secondary vehicle 2. The data $Y_2$ are transmitted to the processing unit 10.

In a step 204 similar to step 104, the processing unit 10 calculates the difference (innovation $Z_2$) between the observed measurements $Y_2$ and the expected measurements. The expected measurements are deduced from a state previously estimated by the invariant Kalman filter, denoted $\hat{X}_2$.

In a correction step 206, the processing unit 10 multiplies the innovation $Z_2$ by a gain matrix $K_2$ which turns $Z_2$ into a linear correction $dx_2 = K_2 Z_2$ to be applied to the state of the system.
This correction step 206 is similar to step 106, with the difference that the gain matrix $K_2$ is chosen so as to stabilize a second non-linear error variable $e$ defined by:

$e = X^{-1} * \hat{X}_2$

In a retraction step 208 identical to step 108, the processing unit 10 transforms the linear correction $dx_2$ into a non-linear correction $C_2$ of the same nature as $\hat{X}_2$ (the state $\hat{X}_2$ is not a vector because it contains rotations).

A non-linear update step 210 similar to step 110 is then implemented by the processing unit 10. In this step 210, the processing unit 10 combines the estimation $\hat{X}_2$ of the state of the system with the non-linear correction $C_2$ to build a corrected estimation in the following manner:

$\hat{X}_2^+ = \hat{X}_2 * C_2$

In a propagation step 212 identical to step 112, the processing unit 10 generates a propagated state from the state $\hat{X}_2^+$.

The steps described above form an iteration of the invariant Kalman filter. The processing unit 10 repeats these same steps 202, 204, 206, 208, 210, 212 in new iterations of the invariant Kalman filter. The state estimated during the propagation step 212 of a given iteration is used as input data for the innovation calculation 204 and non-linear update 210 steps of a next iteration. As in the method 100 according to the first embodiment, the dependence of the evolution of the error on the state of the system is reduced.

Either of the methods 100, 200 described above can be implemented by the main vehicle 1. The fundamental difference between the method 100 according to the first embodiment and the method 200 according to the second embodiment lies in the relative kinematic data between the main vehicle 1 and the secondary vehicle 2 used as an observation by the invariant Kalman filter: in the case of the method 100, these data are expressed in the frame attached to the main vehicle 1, while in the case of the method 200, these data are expressed in an external frame.

The processing unit 10 of the main vehicle 1 advantageously implements a method according to a third embodiment, combining an implementation in parallel of the two preceding methods 100 and 200. The first method 100, leading to obtaining the data $\hat{X}_1^+$, is for example implemented by a first processor of the processing unit 10, while the second method 200, leading to obtaining the data $\hat{X}_2^+$, is implemented by a second processor of the processing unit. In other words, two invariant Kalman filters are implemented in parallel by these two processors.

In a merging step 302, the processing unit 10 merges the data $\hat{X}_1^+$ and $\hat{X}_2^+$ in order to obtain an optimized estimation of the navigation state of the fleet, denoted $\hat{X}_{opt}^+$. For example, $\hat{X}_{opt}^+$ is the average of $\hat{X}_1^+$ and $\hat{X}_2^+$. As the states $\hat{X}_1^+$ and $\hat{X}_2^+$ are not vectors, their classic average is replaced with any average definition adapted to manifolds. Those skilled in the art may find generalized average definitions for manifolds in Markley, F. L., Cheng, Y., Crassidis, J. L., & Oshman, Y. (2007), "Averaging quaternions," Journal of Guidance, Control, and Dynamics, 30(4), 1193-1197. Where appropriate, this average is weighted by covariance matrices associated with the data $\hat{X}_1^+$ and $\hat{X}_2^+$, expressing the uncertainty of these estimations. These covariance matrices are also produced by the two invariant Kalman filters of the methods 100 and 200.
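As an illustration of the merging step 302, the following sketch computes an unweighted average of two corrected estimations. The rotation parts are averaged here with a chordal (SVD-projection) mean, which is one standard generalization of the average to SO(3) and merely stands in for the quaternion averaging of Markley et al. cited above; the covariance weighting is omitted for brevity, and all names are illustrative.

```python
import numpy as np

# Illustrative sketch of the merging step 302 with an unweighted average.
# Rotation parts use a chordal mean (arithmetic mean projected back onto
# SO(3) by SVD); vector parts are averaged classically.

def so3_mean(R1, R2):
    """Project the arithmetic mean of two rotation matrices onto SO(3)."""
    U, _, Vt = np.linalg.svd(0.5 * (R1 + R2))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def merge(X1, X2):
    """Merge the corrected estimations X1+ and X2+ into X_opt+."""
    Rp1, xp1, Rsp1, xsp1 = X1
    Rp2, xp2, Rsp2, xsp2 = X2
    return (so3_mean(Rp1, Rp2), 0.5 * (xp1 + xp2),
            so3_mean(Rsp1, Rsp2), 0.5 * (xsp1 + xsp2))
```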
The invention is not limited to the embodiments described above. It is possible to include in the navigation state of the fleet velocities or the vectors representing the positions $q_i$ of characteristic points (or daymarks). This navigation state may also be limited to comprising position and rotation data. Furthermore, the considered fleet of vehicles can comprise several secondary vehicles, and the navigation state can be extended so as to comprise elements specific to each of the secondary vehicles of the fleet.

In addition, it is not mandatory to use inertial data acquired by an inertial unit during the implementation of either of the methods described above. However, when such inertial data are used, the state of the system should include states representative of the velocity of each vehicle equipped with an inertial unit, namely:
- the velocity $v_p$ of the main vehicle in the reference frame;
- the velocity deviation $v_{sp}$, relative to the fixed frame, between the two vehicles, projected in the frame attached to the secondary vehicle.

To put it another way, $v_{sp}$ is defined by $v_{sp} = R_s^T (v_p - v_s)$, where $v_p$ is the velocity of the main carrier in the fixed frame, $v_s$ the velocity of the secondary carrier in the fixed frame, and $R_s = R_p R_{sp}^T$ the rotation matrix allowing the switching of the coordinates of a point from the secondary frame to its coordinates in the fixed frame. Only one of these two states can also be added to the system.

It should be noted that the navigation state could, in another embodiment, be formed by the natural variables $(R_p, x_p, R_s, x_s)$, where the rotation matrix $R_s$ and the vector $x_s \in \mathbb{R}^3$ are such that a point with coordinates $u \in \mathbb{R}^3$ in the frame attached to the secondary vehicle has the coordinates $R_s u + x_s$ in the fixed frame. In this case, the binary operation to be used is more complex:

$(R_p, x_p, R_s, x_s) * (R'_p, x'_p, R'_s, x'_s) = (R_p R'_p,\ x_p + R_p x'_p,\ R_p R'_s R_p^T R_s,\ R_p R'_s R_p^T (x_s - x_p) + R_p x'_s + x_p)$
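As a sanity check on this more complex operation, the following sketch (not from the patent) verifies numerically that it is the term-by-term operation of section 2/ seen through the change of variables implied by the frame definitions, namely $R_{sp} = R_s^T R_p$ and $x_{sp} = R_s^T (x_p - x_s)$; the helper names are illustrative.

```python
import numpy as np

# Check that composing natural variables (Rp, xp, Rs, xs) with the
# operation above matches the term-by-term operation on the relative
# variables (Rp, xp, Rsp, xsp) after the change of variables.

def rand_rot(rng):
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q if np.linalg.det(q) > 0 else -q  # force det = +1

def op_natural(A, B):
    Rp, xp, Rs, xs = A
    Rq, xq, Rt, xt = B
    return (Rp @ Rq,
            xp + Rp @ xq,
            Rp @ Rt @ Rp.T @ Rs,
            Rp @ Rt @ Rp.T @ (xs - xp) + Rp @ xt + xp)

def to_relative(A):
    Rp, xp, Rs, xs = A
    return (Rp, xp, Rs.T @ Rp, Rs.T @ (xp - xs))

def op_relative(A, B):
    Rp, xp, Rsp, xsp = A
    Rq, xq, Rsq, xsq = B
    return (Rp @ Rq, xp + Rp @ xq, Rsp @ Rsq, xsp + Rsp @ xsq)

rng = np.random.default_rng(0)
A = (rand_rot(rng), rng.standard_normal(3), rand_rot(rng), rng.standard_normal(3))
B = (rand_rot(rng), rng.standard_normal(3), rand_rot(rng), rng.standard_normal(3))

# Composing then converting equals converting then composing term by term.
lhs = to_relative(op_natural(A, B))
rhs = op_relative(to_relative(A), to_relative(B))
assert all(np.allclose(l, r) for l, r in zip(lhs, rhs))
```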
19,169
11860286
BEST MODE FOR CARRYING OUT THE INVENTION

The following embodiments can improve accuracy of identifying the physical location of a vehicle, which enables vehicle movement control for operating or controlling physical movement of the vehicle without the use of expensive sensors that can reduce the overall reliability of the vehicle. The vehicle movement control can be based on a driver-assisted or an autonomous vehicle driving process that is safe and reliable due to the accuracy of the location correction mechanism. The navigation system with the location correction mechanism can maintain centimeter-level accuracy without the addition of expensive and unreliable sensors that elevate the cost of ownership of the driver-assisted or the autonomous vehicle. The vehicle movement control can further be based on accurately identifying the physical location to centimeter accuracy on a real-time basis in order to assure that the driver-assisted or autonomous vehicle can be operated without risk of damage to the vehicle or any adjacent objects or property.

The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of an embodiment of the present invention.

In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring an embodiment of the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.

The drawings showing embodiments of the system are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing figures. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the figures is arbitrary for the most part. Generally, the invention can be operated in any orientation. The embodiments of various components are described as a matter of descriptive convenience and are not intended to have any other significance or provide limitations for an embodiment of the present invention.

One skilled in the art would appreciate that the format with which navigation information is expressed is not critical to some embodiments of the invention. For example, in some embodiments, navigation information is presented in the format of (X, Y, Z), where X, Y, and Z are three coordinates that define the geographic location, i.e., a position of a vehicle.

The term "module" referred to herein can include or be implemented as or include software running on specialized hardware, hardware, or a combination thereof in the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, and application software. The software can also include a function, a call to a function, a code block, or a combination thereof. Also, for example, the hardware can be gates, circuitry, a processor, a computer, an integrated circuit, integrated circuit cores, memory devices,
a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, a physical non-transitory memory medium including instructions for performing the software function, a portion therein, or a combination thereof to control one or more of the hardware units or circuits. Further, if a "unit" is written in the system claims section below, the "unit" is deemed to include hardware circuitry for the purposes and the scope of the system claims.

The "units" in the following description of the embodiments can be hardware structures or functions coupled or attached to one another as described or as shown. The coupling or attachment can be direct or indirect, without or with intervening items between coupled or attached modules or units. The coupling or attachment can be by physical contact or by communication between modules or units, such as wireless communication.

It is also understood that the nouns or elements in the embodiments can be described as a singular instance. It is understood that the usage of singular is not limited to singular but can be applicable to multiple instances for any particular noun or element in the application. The numerous instances can be the same or similar or can be different.

Referring now to FIG. 1, therein is shown a block diagram of a navigation system 100 with a location correction mechanism in an embodiment of the present invention. The navigation system 100 can include a first device 102, such as a client or a server, connected to a second device 106, such as a cloud server included in a cloud 108 of the second device 106. The cloud 108 can be a loosely coupled computing structure, including the second device 106 that can provide computer resources and storage through a cloud network 104. The navigation system 100 can include a base station 110 configured to communicate with a position satellite 112. The first device 102 can communicate with the second device 106 through the cloud network 104, such as a wireless or wired network of computing resources. The base station 110 can be a hardware structure, or tower, that includes electronics configured to communicate with the position satellite 112 and the second device 106.

The position satellite 112 can be a hardware and electronic package orbiting the Earth at approximately 20,200 kilometers above the surface of the Earth. The position satellite 112 is configured to provide a position signal 114 to the first device 102.

For example, the first device 102 can be any of a variety of computing devices, such as a cellular phone, a personal digital assistant, a notebook computer, a wearable device, an internet of things (IoT) device, an automotive telematics navigation system, or another multi-functional device. Also, for example, the first device 102 can include a device or a sub-system, an autonomous or self-maneuvering vehicle or object, a driver-assisted vehicle, a remote-controlled vehicle or object, or a combination thereof. The first device 102 can couple, either directly or indirectly, to the cloud network 104 to communicate with the second device 106 or can be a stand-alone device. The first device 102 can further be separate from or incorporated with a vehicle, such as a car, truck, bus, or motorcycle.

For illustrative purposes, the navigation system 100 is described with the first device 102 as a mobile computing device, although it is understood that the first device 102 can be different types of devices.
For example, the first device 102 can also be a mobile computing device carried within the vehicle and configured to identify a physical position of the vehicle.

The second device 106 can be any of a variety of centralized or decentralized computing devices. For example, the second device 106 can be a cloud computer, grid computing resources, a virtualized computer resource, a cloud computing resource, routers, switches, peer-to-peer distributed computing devices, or a combination thereof. The second device 106 can be centralized in a single room, distributed across different rooms, distributed across different geographical locations, or embedded within a telecommunications network. The second device 106 can couple with the cloud network 104 to communicate with the first device 102. The second device 106 can also have significantly more computing power than the first device 102.

For illustrative purposes, the navigation system 100 is described with the second device 106 as a non-mobile cloud computing device, although it is understood that the second device 106 can be different types of computing devices. For example, the second device 106 can also be a mobile computing device, such as a notebook computer, another client device, a wearable device, or a different type of client device.

Also, for illustrative purposes, the navigation system 100 is shown with the second device 106 and the first device 102 as endpoints of the cloud network 104, although it is understood that the navigation system 100 can include a different partition between the first device 102, the second device 106, and the cloud network 104. For example, the second device 106 can also function as part of the cloud network 104.

The cloud network 104 can span and represent a variety of networks. For example, the cloud network 104 can include wireless communication, wired communication, optical communication, ultrasonic communication, or a combination thereof. Satellite communication, cellular communication, Bluetooth, Infrared Data Association standard (IrDA), wireless fidelity (WiFi), and worldwide interoperability for microwave access (WiMAX) are examples of wireless communication that can be included in the cloud network 104. Ethernet, digital subscriber line (DSL), fiber to the home (FTTH), and plain old telephone service (POTS) are examples of wired communication that can be included in the cloud network 104. Further, the cloud network 104 can traverse a number of network topologies and distances. For example, the cloud network 104 can include a direct connection, a personal area network (PAN), a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), or a combination thereof.

The navigation system 100 can provide additional features that are not available in prior art navigation systems. The first device 102 can be coupled to the positioning satellite 112. The positioning satellite 112 can be a portion of a satellite array (not shown) that is configured to provide the physical position of the first device 102. The communication between the first device 102 and the positioning satellite 112 can provide the position of the first device 102 to within a five-meter to ten-meter accuracy.
While this accuracy is sufficient for prompting a driver for changes in route, it cannot support an autonomous vehicle without adding expensive sensors that can increase the cost of the first device 102 and reduce the reliability due to sensor frailty. The inaccuracies of the position provided by the positioning satellite 112 can be caused by errors including satellite clock bias, satellite orbital error, ionospheric delay, tropospheric delay, multipath interference, and receiver thermal noise.

The base station 110 can be located in an actual location 111, such as a well-defined and known location. The base station 110 can be a satellite receiver/transmitter that samples the position signal 114 in order to mitigate as many of the inaccuracies as possible. The base station 110 can communicate the position signal 114, received from the positioning satellite 112, to the second device 106 for analysis and correction.

The second device 106 can be coupled to or include a location correction module 115, such as a signal evaluation module, that can be implemented in software running on specialized hardware, in full hardware, or in a combination thereof, configured to analyze the position signal 114. The location correction module 115 can compare the position signal 114 received from the first device 102 and the base station 110. During a training process, the first device 102 can upload the position signal 114 to the second device 106 for further analysis or to generate a real-time kinematics (RTK) correction 109.

The location correction module 115 can include an artificial intelligence (AI) correction calculator 116, such as a neural network, that can be trained to perform double difference calculations on the position signal 114 received from the base station 110 and the first device 102. The AI correction calculator 116 can consider that the first device 102 is in close proximity to the base station 110, even with a distance of 100 kilometers between them, because the separation distance is insignificant relative to the altitude of the position satellite 112, substantially 20,200 kilometers, resulting in a difference angle of less than a degree. This allows the satellite orbital error, ionospheric delay, tropospheric delay, and multipath interference to be cancelled out as being equal for the base station 110 and the first device 102. Due to the geometric relationship between the first device 102, the base station 110, and the position satellite 112, the values of the satellite orbital error, the ionospheric delay, the tropospheric delay, and the multipath interference are substantially equal and can be negated.

The location correction module 115 can include the AI correction calculator 116, a parameter storage 118, and a satellite array storage 120 that can store parametric information for each of the position satellites 112 visible in the sky at a particular time. It is understood that the global positioning system requires four of the position satellites 112 in order to define a single point on the globe. The position satellite 112 can have a highly predictable orbit and can move through the region covered by the base station 110 in a known periodic fashion.

The parameter storage 118 can be a volatile or non-volatile memory configured to maintain parameters and the RTK correction 109, related to specific ones of the positioning satellites 112, for a specific one of the base stations 110. It is understood that the second device 106 can provide the RTK correction 109 for several of the base stations 110 and the array of the position satellites 112.
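The cancellation argument above can be pictured with a deliberately simplified one-dimensional sketch (synthetic numbers, not the patent's double-difference computation): because the dominant errors are common to the base station 110 and the first device 102, the offset observed at the base station's known actual location transfers directly to the rover.

```python
# One-dimensional illustration of differential correction; all values
# are synthetic and purely illustrative.

base_truth = 1000.00      # known coordinate of the base station (m)
common_error = 4.37       # shared orbital/atmospheric error (m)
base_measured = base_truth + common_error

rover_truth = 1012.50     # unknown in practice
rover_measured = rover_truth + common_error

rtk_correction = base_truth - base_measured       # -4.37 m
rover_corrected = rover_measured + rtk_correction
assert abs(rover_corrected - rover_truth) < 1e-9  # common error cancels
```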
The satellite array storage 120 can be a volatile or non-volatile memory configured to store the frequency, orbital parameters, approximate altitude, and horizon timeline for each of the position satellites 112 that cross over the region covered by the base station 110. By way of an example, a standard global positioning system requires a minimum of four of the position satellites 112 in order to define a single point in the region. In order to support the minimum requirement, at least six of the position satellites 112 can be identified over the region at any time.

Since the base station 110 is located at the actual location 111, such as a well-defined and precise location, the base station 110 can sample each of the position satellites 112 that services the region around the base station 110. By periodically updating the parametric information from the position signal 114, the base station 110 can allow the second device 106 to quickly respond with the RTK correction 109 for the first device 102.

Training of the AI correction calculator 116 can be accomplished by collecting readings from two of the base stations 110 for one of the position satellites 112 that is visible to both. The second device 106 can monitor the position signal 114 received by both of the base stations 110. Since the actual location 111 of each base station 110 is known, the AI correction calculator 116 can resolve the location discrepancies for both of the base stations 110.

When the first device 102 enters the region within five to ten kilometers of the base station 110, the first device 102 can relay its position, as determined by the position signal 114, to the second device 106 for correction. The second device 106 can return the RTK correction 109 to the first device 102. By applying the RTK correction 109, the first device 102 can calibrate its position to within a few centimeters.

It has been discovered that the navigation system 100 can reliably identify the RTK correction 109 in order to provide real-time updates of the actual location 111 for the first device 102. The RTK correction 109 can be calculated by the AI correction calculator 116 over a fixed period of time in order to support the first device 102. By sending the RTK correction 109 from the second device 106, a communication can be distributed to other users of the navigation system 100 for determining their actual location 111. The navigation system 100 can improve determination of the actual location 111 of the first device 102, which can allow operation of autonomous vehicles without the support of expensive and unreliable sensors.

Referring now to FIG. 2, therein is shown an exemplary top plan view 201 of the navigation system 100 of FIG. 1 in an embodiment. The navigation system 100 can include or interact with a satellite array 202 that communicates a satellite provided reference location 204 to the first device 102. The satellite array 202 can include a first position satellite 206, a second position satellite 208, a third position satellite 210, and a fourth position satellite 212. It is understood that the identification of the satellite provided reference location 204 requires a minimum of four of the position satellites 112 of FIG. 1 and that additional ones of the position satellites 112 can be added for additional reliability. The satellite provided reference location 204 can only maintain an accuracy of one to five meters around the first device 102.
While this level of accuracy was sufficient for navigation assistance of an operator-based vehicle, it causes an autonomous vehicle to add expensive and unreliable optical and radar sensors to keep the vehicle safely within the lane markers and on the selected path.

The first device 102 can be an object or a machine used for transporting people or goods capable of automatically maneuvering or operating the object or the machine. The first device 102 can include vehicles accessible by a user for control, maneuver, operation, or a combination thereof. For example, the first device 102 can include a car, a truck, a cart, a drone, or a combination thereof. For example, the first device 102 can include a self-driving vehicle, or a vehicle with automatic maneuvering features, such as smart cruise control or preventative braking. The first device 102 can include a smart cruise control feature, capable of setting and adjusting the travel speed of the first device 102.

The satellite array 202 can also be in communication with a first base station 214, located at a first actual position 216, and a Qth base station 218, located at a Qth actual position 220. The first base station 214 and the Qth base station 218 can also be coupled to the second device 106, located in the cloud 108 of FIG. 1. It is understood that the first base station 214 and the Qth base station 218 can each be in communication with a different one of the second device 106 as well as with each other. The second device 106 can utilize the first actual position 216 and the Qth actual position 220 to train the AI correction calculator 116 to correct the satellite provided reference location 204 sent from the first base station 214 and the Qth base station 218. It is understood that the value of "Q" is a positive integer greater than 1. The addition of multiple instances of the base station 110 of FIG. 1 can implement a sea of RTK cells, with each of the base stations 110 communicating with the second device 106 through the cloud network 104.

When the first device 102 receives the satellite provided reference location 204 from the satellite array 202, the satellite provided reference location 204 can be transferred to the first base station 214 through an over-the-air (OTA) communication 222. The first base station 214 can forward the satellite provided reference location 204 to the second device 106. The second device 106 can perform the RTK calculations by the AI correction calculator 116 of FIG. 1 and instruct the first base station 214 to transfer the result of the AI correction calculator 116 to the first device 102.

The second device 106 can monitor the OTA communication 222 between the first base station 214 and the first device 102 to determine when the first device 102 will cross a cell boundary 226. The second device 106 can initiate a parameter transfer 227 between the first base station 214 and the Qth base station 218 in preparation for the first device 102 crossing from a first RTK cell 224 to a Qth RTK cell 228. The parameter transfer 227 can include a pseudorange of the first position satellite 206, a carrier phase 207 of the first position satellite 206, a carrier phase ambiguity calculated by the AI correction calculator 116, an estimated clock error of the first position satellite 206, a computed position of the first device 102 in the first RTK cell 224, and a list of the satellite array 202 used to locate the first device 102, as sketched below. The first RTK cell 224 and the Qth RTK cell 228 are defined to be geographic areas serviced by a specific subset of the satellite array 202.
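One way to picture the parameter transfer 227 is as a record holding the items enumerated above; the following sketch is purely illustrative, and the field names are hypothetical rather than taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical container for the parameter transfer 227 between the
# first base station 214 and the Qth base station 218.

@dataclass
class HandoffParameters:
    pseudorange_m: float            # pseudorange of the position satellite
    carrier_phase_cycles: float     # carrier phase measurement
    carrier_phase_ambiguity: int    # ambiguity fixed by the AI calculator
    satellite_clock_error_s: float  # estimated satellite clock error
    rover_position: Tuple[float, float, float]  # computed position in cell
    satellite_ids: List[int]        # subset of the satellite array in use
```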
The carrier phase 207 is defined as the shifting of the carrier frequency due to the delay generated by the atmospheric layers. It is understood that the carrier phase 207 can remain substantially constant within the first RTK cell 224 and the Qth RTK cell 228 because the separation distance of 20 km between the base station 110 and the first device 102 results in the elevation angle to the position satellite 112 differing between the base station 110 and the first device 102 by only hundredths of a degree.

The AI correction calculator 116 of the second device 106 can perform the RTK calculation to determine the actual position 111 of FIG. 1 of the first device 102 with centimeter accuracy. Under a single baseline condition, the global navigation satellite system (GNSS) receivers at the two ends are named "base", for the base station 110, and "rover", for the first device 102, respectively; the subscripts "b" and "r" are used in the equations below. The pseudorange ρ is defined as the corrected range of the orbit altitude of the position satellite 112, as shown in equations (1) and (2). The carrier phase 207 ϕ is defined as the shifting of the carrier frequency due to the delay generated by the atmospheric layers, as shown in equations (3) and (4). The carrier phase 207 ϕ and the pseudorange ρ measurements from a certain satellite j of the satellite array 202 observed by the two receivers at a certain instant of time can be written as:

$\rho_{j,r} = \lambda^{-1}(r_{j,r} + I_{j,r} + T_{j,r}) + f(\delta t_r - \delta t_j) + \varepsilon_{j,r}$  (1)

$\rho_{j,b} = \lambda^{-1}(r_{j,b} + I_{j,b} + T_{j,b}) + f(\delta t_b - \delta t_j) + \varepsilon_{j,b}$  (2)

$\phi_{j,r} = \lambda^{-1}(r_{j,r} - I_{j,r} + T_{j,r}) + f(\delta t_r - \delta t_j) + N_{j,r} + \eta_{j,r}$  (3)

$\phi_{j,b} = \lambda^{-1}(r_{j,b} - I_{j,b} + T_{j,b}) + f(\delta t_b - \delta t_j) + N_{j,b} + \eta_{j,b}$  (4)

where ρ and ϕ are the pseudorange and carrier phase 207 measurements (unit: carrier cycles), respectively, λ is the carrier wavelength (unit: m), r represents the true geometric distance between the satellite and the receiver (unit: m), T is the tropospheric delay (unit: m), I is the ionospheric delay (unit: m), f is the carrier frequency (unit: Hz), $\delta t_r$ is the receiver clock error (unit: s), $\delta t_j$ is the satellite clock error (unit: s), N is the integer ambiguity, and ε and η are the measurement errors of the pseudorange and carrier phase 207, respectively. Their variances can be modeled as a simplified function of the elevation angle as given by equation (5):

$\sigma^2 = a(b + b/\sin^2\theta)/\lambda^2$  (5)

where θ is the elevation angle of the satellite, and a and b can be set empirically; for example, a = 1 and b = 9×10⁻⁶ can be selected for the carrier phase 207. It is understood that the elevation angle θ is substantially the same for the base station 110 and the first device 102, because the separation of 10-20 kilometers is small compared to the altitude of the position satellite 112, which is substantially 20,200 kilometers. By way of an example, the elevation angle between the base station 110 and the first device 102 can differ in the range of 0.028 degrees to 0.056 degrees. As such, the tropospheric delay T and the ionospheric delay I can be assumed to be equal for the base station 110 and the first device 102.

The single-differenced (SD) measurement model can be obtained by subtracting the base receiver's measurements from the rover receiver's measurements, i.e., equation (1) − equation (2) and equation (3) − equation (4):

$\rho_{j,rb} = \lambda^{-1}(r_{j,rb} + I_{j,rb} + T_{j,rb}) + f\,\delta t_{rb} + \varepsilon_{j,rb}$  (6)

$\phi_{j,rb} = \lambda^{-1}(r_{j,rb} - I_{j,rb} + T_{j,rb}) + f\,\delta t_{rb} + N_{j,rb} + \eta_{j,rb}$  (7)

where the subscript "rb" represents the difference between the corresponding terms of the rover and the base. It can be seen that $\delta t_j$, as a common error, is eliminated by this differencing.
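The differencing scheme of equations (1)-(7) can be sketched with synthetic numbers: subtracting base from rover per satellite removes the satellite clock term, and differencing again between two satellites removes the receiver clock term, leaving relative geometry plus an integer ambiguity. The values and helper names below are illustrative only, with the atmospheric terms already assumed common.

```python
# Synthetic single/double differencing sketch (not the patent's code).
# Per-satellite carrier-phase observations are modeled, in cycles, as
# geometry + receiver clock - satellite clock + integer ambiguity.

def single_difference(rover_obs, base_obs):
    """Rover-minus-base difference per satellite, as in eq. (6)-(7):
    the satellite clock term cancels."""
    return {j: rover_obs[j] - base_obs[j] for j in rover_obs}

def double_difference(sd, ref_sat, other_sat):
    """Between-satellite difference of single differences: the common
    receiver clock term cancels, leaving an integer ambiguity."""
    return sd[other_sat] - sd[ref_sat]

geom_rover = {1: 1.07e8 + 37.25, 2: 1.09e8 + 41.50}  # lambda^-1 * r terms
geom_base  = {1: 1.07e8,         2: 1.09e8}
sat_clk    = {1: 2.1e4, 2: -3.3e4}                   # f * dt_j terms
amb_rover, amb_base = {1: 5, 2: -7}, {1: 12, 2: 3}   # integer N terms
clk_rover, clk_base = 8.8e3, -1.2e3                  # f * dt_r, f * dt_b

phase_rover = {j: geom_rover[j] + clk_rover - sat_clk[j] + amb_rover[j]
               for j in (1, 2)}
phase_base = {j: geom_base[j] + clk_base - sat_clk[j] + amb_base[j]
              for j in (1, 2)}

sd = single_difference(phase_rover, phase_base)
dd = double_difference(sd, ref_sat=1, other_sat=2)

# Relative geometry plus an integer double-difference ambiguity remains:
# (41.50 - 37.25) + ((-7 - 3) - (5 - 12)) = 4.25 - 3 = 1.25.
assert abs(dd - 1.25) < 1e-6
```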
After double differencing, the receiver and satellite clock offsets and hardware biases cancel out. The between-satellite difference of the single-difference ambiguities, $N^{12}_a - N^{12}_b$, is commonly parameterized as a new ambiguity parameter $N^{12}_{ab}$. The advantage of double differencing is that the new ambiguity parameter $N^{12}_{ab}$ is an integer, because the non-integer terms in the GPS carrier phase 207 observation, due to clock and hardware delays in the transmitter and receiver, are eliminated. Although it would be possible to estimate the double difference ambiguity using a float approach instead of an integer one, this would lead to less accuracy, such as dm-level accuracy instead of cm-level. Hence, standard RTK constrains the ambiguities to integer values.

Since the first actual position 216 of the first base station 214 is known to the second device 106, the second device 106 can return the RTK correction 109 for the first device 102 based on corrections generated for the base station 110.

As the first device 102 proceeds through the first RTK cell 224 and approaches the cell boundary 226, the second device 106 can cause the first base station 214 to transfer the RTK parameters necessary for centimeter positioning to the Qth base station 218 in preparation for the first device 102 entering the Qth RTK cell 228. It is understood that the second device 106 can anticipate the transition of the first device 102 into the Qth RTK cell 228. By way of an example, if the first device 102 is travelling at 60 miles per hour (MPH) approaching the cell boundary 226, the first device 102 will travel just 1.056 inches in one millisecond.

The first RTK cell 224 and the Qth RTK cell 228 can be formed around the first base station 214 and the Qth base station 218, respectively. Additional base stations 110 can be added in the region to form a sea of RTK cells (not shown). In preparation for the first device 102 crossing into the Qth RTK cell 228, the second device 106 can update critical parameters in the Qth base station 218, including positions, pseudoranges, carrier phase 207 measurements, and the corresponding integer ambiguity. The Qth base station 218 can receive the necessary parameters to continue to provide centimeter-level position accuracy for the first device 102.

It has been discovered that the navigation system 100 can provide the RTK correction 109 for the first device 102 by transferring the satellite provided reference location 204 to the second device 106. The second device 106 can be a cloud server as part of the cloud network 104. The second device 106 can process the satellite provided reference location 204 through the AI correction calculator 116 and determine the RTK correction 109. By transferring the RTK correction 109 back to the first device 102, centimeter position accuracy can be maintained. The transfer of critical parameters to the Qth base station 218, including positions, pseudoranges, carrier phase 207 measurements, a list of satellites 117 used from the satellite array 202, and the corresponding integer ambiguity calculated by the AI correction calculator 116, allows the first device 102 to drive across the cell boundary 226 from the first RTK cell 224 to the Qth RTK cell 228 while maintaining the centimeter-level position accuracy.

Referring now to FIG. 3, therein is shown an exemplary elevation map 301 processed by the navigation system 100. The exemplary elevation map 301 can include the base station 110 fixed at the actual location 111 on the Earth surface 302. The base station 110 can communicate with the second device 106, located in the cloud 108, by the over-the-air (OTA) communication 222.
The base station 110 can also communicate with the position satellite 112 and the first device 102 for determining the actual location of the first device 102. The position satellite 112 can transmit the position signal 114 to both the first device 102 and the base station 110. The position signal 114 must penetrate several layers of the atmosphere, including a tropospheric layer 304, a stratospheric layer 306, an ionospheric layer 308, and an exospheric layer 310.

The tropospheric layer 304 extends from the Earth surface 302 up to an altitude of approximately 25 kilometers, where it borders the stratospheric layer 306. The tropospheric layer 304 presents a significant amount of electrical noise and radiated energy that can impact the position signal 114 by causing phase shift delay. The stratospheric layer 306 can be a benign layer that extends from the top of the tropospheric layer 304 at 25 kilometers up to approximately 50 kilometers. The stratospheric layer 306 can be electrically stable and poses little impact on the position signal 114. The stratospheric layer 306 can border the ionospheric layer 308, which extends from approximately 50 kilometers to 500 kilometers. The ionospheric layer 308 can contain high amounts of ionized gases, including the ozone layer. The ionospheric layer 308 can impact the position signal 114 due to the ionic gases shifting the phase and delaying the position signal 114. The exospheric layer 310 can border the ionospheric layer 308 and can extend from approximately 500 kilometers to 60,000 kilometers. The exospheric layer 310 can be a partial vacuum, containing rare occurrences of ionized gas molecules.

The position satellite 112 can orbit the Earth surface 302 at approximately 20,200 kilometers. At this altitude, the position satellite 112 can complete two orbits of the Earth in a single day. The first device 102 can be in close proximity 312, in the range of 10-20 kilometers, to the base station 110. When the first device 102 receives the position signal 114, the impact of the ionospheric layer 308 and the tropospheric layer 304 induces an amount of uncertainty in the position of the first device 102. At the same time, the base station 110 can receive the position signal 114 including the same uncertainty as the first device 102, but the actual position 111 of the base station 110 is well known to the second device 106.

The second device 106 can process the position signal 114 received by the base station 110 in order to identify the uncertainty that was induced by the ionospheric layer 308 and the tropospheric layer 304. The second device 106 can use the AI correction calculator 116 of FIG. 1 to identify the uncertainty based on knowing the actual position 111 of the base station 110. Since the first device 102 is subject to the same uncertainty that the base station 110 experienced, the same correction that was identified for the base station 110 can apply to the first device 102.

The angular difference shown in FIG. 3 is exaggerated for ease of description. In actuality, the angular difference between the base station 110 to the position satellite 112 and the first device 102 to the position satellite 112 is on the order of 0.028 to 0.056 degrees. Based on this, the position signal 114 will experience the same amount and direction of uncertainty at both the base station 110 and the first device 102. As such, the RTK correction 109 that applies to the base station 110 will correct the satellite provided reference location 204 of FIG. 2 to a real-world location 314 of the first device 102.
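Two of the numeric figures quoted in this section can be reproduced directly from the stated geometry and units; the following short check is illustrative and not part of the patent.

```python
import math

# Reproducing two quoted figures: the 0.028-0.056 degree elevation-angle
# spread for a 10-20 km baseline against a satellite at roughly
# 20,200 km altitude, and the 1.056 inches travelled per millisecond
# at 60 MPH.

altitude_km = 20200.0
for baseline_km in (10.0, 20.0):
    angle_deg = math.degrees(math.atan2(baseline_km, altitude_km))
    print(f"{baseline_km:.0f} km baseline -> {angle_deg:.3f} deg")
# Prints approximately 0.028 and 0.057 degrees.

speed_mps = 60 * 1609.344 / 3600          # 60 MPH in meters per second
inches_per_ms = speed_mps * 1e-3 / 0.0254
assert abs(inches_per_ms - 1.056) < 1e-3  # matches the quoted figure
```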
The real-world location 314 is defined as the physical location of the first device 102 on the surface of the Earth, provided with between one centimeter and three centimeters of precision. It has been discovered that the navigation system 100 of FIG. 1 can rely on the actual location 111 of the base station 110, submitted to the AI correction calculator 116, to identify a pseudorange ρ 316 and the carrier phase ϕ 207 of FIG. 2 from the position signal 114. By way of an example, the pseudorange 316 and the carrier phase 207 identified for the base station 110 can be applied to the first device 102 in order to calculate the real-world location 314 of the first device 102. Since the second device 106 performs the heavy computations through the AI correction calculator 116, the complexity of the first device 102 can be reduced to increase reliability and reduce cost by eliminating the expensive optical and radar sensors that would otherwise be used to guide the first device 102 based on the less accurate satellite provided reference location 204 of FIG. 2.

Referring now to FIG. 4, therein is shown an exemplary elevation map 401 of the satellite array 202 processed by the navigation system 100 of FIG. 1. The exemplary elevation map 401 of the satellite array 202 depicts the first position satellite 206 and the second position satellite 208 in communication with the first base station 214 and the first device 102. The first base station 214 can also communicate with the cloud 108 by the OTA communication 222 to access the second device 106.

The first base station 214 can selectively process the position signal 114 from the first position satellite 206. The first base station 214 can send the position signal 114 by way of the OTA communication 222 to the second device 106. The second device 106 can calculate the RTK correction 109 for the first position satellite 206 and the first device 102. When the RTK correction 109 for the first position satellite 206 is completed, the first base station 214 can select a second position signal 402 for processing by the second device 106. By way of an example, the AI correction calculator 116 of FIG. 1 can apply the same process and equations used for the first position satellite 206 to the second position satellite 208. Since the viewing angle between the first base station 214 and the first position satellite 206 is significantly different from the viewing angle between the first base station 214 and the second position satellite 208, the amount of uncertainty applied to the second position signal 402 can be different from that of the first position signal 114. The first base station 214 can communicate the RTK correction 109 for each of the position satellites 112 of FIG. 1 in the satellite array 202.

The second device 106 can store the critical parameters, including the pseudorange 316 of FIG. 3 and the carrier phase 207 of FIG. 2 identified for the first base station 214, in the parameter storage 118 of FIG. 1, and the schedule of access for the first position satellite 206 and the second position satellite 208 can be stored in the satellite array storage 120 of FIG. 1. When the AI correction calculator 116 is in training, pairs of the base stations 110 of FIG. 1 can communicate concurrent information for each of the position satellites 112 of FIG. 1 in the satellite array 202. The RTK correction 109 for each of the position satellites 112 in the satellite array 202 can be stored in the satellite array storage 120 for faster retrieval once training is complete.
It has been discovered that the second device 106 located in the cloud 108 can quickly retrieve critical parameters including the actual location 111 of FIG. 1 of the base station 110, the pseudorange 316 of FIG. 3 of the position satellite 112, carrier phase 207 measurements, and the corresponding integer ambiguity. The storage of the relevant information for each of the position satellites 112 in the satellite array 202 can decrease the time spent in calculating the RTK correction 109. The storage of the critical parameters can simplify the forwarding of the centimeter-level position support when the first device 102 crosses from the first RTK cell 224 of FIG. 2 to the Qth RTK cell 228 of FIG. 2. Since the critical parameters are transferred from the first base station 214 to the Qth base station 218 of FIG. 2, a quick retrieval of those parameters from the second device 106 can shorten the time required for the transfer and maintain the centimeter-level position monitoring when crossing the cell boundary 226 of FIG. 2.

Referring now to FIG. 5, therein is shown an exemplary block diagram of the navigation system 100 in an embodiment. The navigation system 100 can include the first device 102, the cloud network 104, and the second device 106. The first device 102 can send information in a first device transmission 508 over the cloud network 104 to the second device 106. The second device 106 can send information in a second device transmission 510 over the cloud network 104 to the first device 102 or to the base station 110 of FIG. 1.

For illustrative purposes, the navigation system 100 is shown with the first device 102 as a client device, although it is understood that the navigation system 100 can include the first device 102 as a different type of device. For example, the first device 102 can be a server containing a first display interface 530 coupled to a position display 502. The position display 502 can include a monitor, projector, heads-up display, or a liquid crystal display configured to display the actual position 111 of the first device 102.

Also, for illustrative purposes, the navigation system 100 is shown with the second device 106 as a cloud server, although it is understood that the navigation system 100 can include the second device 106 as a different type of device. For example, the second device 106 can be a client device. The second device 106 can provide training and enhancement of the AI correction calculator 116 of FIG. 1.

Also, for illustrative purposes, the navigation system 100 is shown with interaction between the first device 102 and the second device 106. However, it is understood that the first device 102 can be a part of or the entirety of an autonomous vehicle, a smart vehicle, or a combination thereof. Similarly, the second device 106 can similarly interact with the first device 102 representing the autonomous vehicle, the smart vehicle, or a combination thereof.

For brevity of description in this embodiment of the present invention, the first device 102 will be described as a client device and the second device 106 will be described as a cloud server device. The embodiment of the present invention is not limited to this selection for the type of devices. The selection is an example of an embodiment of the present invention.

The first device 102 can include a first control circuit 512, a first storage circuit 514, a first communication circuit 516, a first interface circuit 518, and a first location circuit 520. The first control circuit 512 can include a first control interface 522. The first control circuit 512 can execute a first software 526 to provide the intelligence of the first device 102.
The first control circuit 512 can be implemented in a number of different manners. For example, the first control circuit 512 can be a processor, an application specific integrated circuit (ASIC), an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof.

The first control interface 522 can be used for communication between the first control circuit 512 and other functional units or circuits in the first device 102. The first control interface 522 can also be used for communication that is external to the first device 102. The first control interface 522 can receive information from the other functional units/circuits or from external sources, or can transmit information to the other functional units/circuits or to external destinations. The external sources and the external destinations refer to sources and destinations external to the first device 102.

The first control interface 522 can be implemented in different ways and can include different implementations depending on which functional units/circuits or external units/circuits are being interfaced with the first control interface 522. For example, the first control interface 522 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), optical circuitry, waveguides, wireless circuitry, wireline circuitry, or a combination thereof.

The first storage circuit 514 can store the first software 526. The first storage circuit 514 can also store relevant information, such as data representing incoming images, satellite data representing the satellite array 202 of FIG. 2, sound files, or a combination thereof. The first storage circuit 514 can be a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof. For example, the first storage circuit 514 can be a nonvolatile storage such as non-volatile random-access memory (NVRAM), Flash memory, disk storage, or a volatile storage such as static random-access memory (SRAM).

The first storage circuit 514 can include a first storage interface 524. The first storage interface 524 can be used for communication between the first storage circuit 514 and other functional units or circuits in the first device 102. The first storage interface 524 can also be used for communication that is external to the first device 102. The first storage interface 524 can receive information from the other functional units/circuits or from external sources, or can transmit information to the other functional units/circuits or to external destinations. The external sources and the external destinations refer to sources and destinations external to the first device 102.

The first storage interface 524 can include different implementations depending on which functional units/circuits or external units/circuits are being interfaced with the first storage circuit 514. The first storage interface 524 can be implemented with technologies and techniques similar to the implementation of the first control interface 522.

The first communication circuit 516 can enable external communication to and from the first device 102. For example, the first communication circuit 516 can permit the first device 102 to communicate with the second device 106 and the cloud network 104.
By way of an example, the first communication circuit 516 can transfer the satellite provided reference location 204 to the second device 106 for correction by the satellite correction module 115. The first communication circuit 516 can also function as a communication hub allowing the first device 102 to function as part of the cloud network 104 and not limited to being an endpoint or terminal circuit to the cloud network 104. The first communication circuit 516 can include active and passive components, such as microelectronics or an antenna, for interaction with the cloud network 104.

The first communication circuit 516 can include a first communication interface 528. The first communication interface 528 can be used for communication between the first communication circuit 516 and other functional units or circuits in the first device 102. By way of an example, the first communication interface 528 can receive the satellite provided reference location 204 from the first control circuit 512. The first communication interface 528 can receive information from the second device 106 for distribution to the other functional units/circuits or can transmit information to the other functional units or circuits.

The first communication interface 528 can include different implementations depending on which functional units or circuits are being interfaced with the first communication circuit 516. The first communication interface 528 can be implemented with technologies and techniques similar to the implementation of the first control interface 522.

The first interface circuit 518 allows the base station 110 of FIG. 1 to interface and interact with the first device 102. The first interface circuit 518 can include an input device and an output device. Examples of the input device of the first interface circuit 518 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, an infrared sensor for receiving remote signals, the wireless receiver, or any combination thereof to provide data and communication inputs. The first interface circuit 518 can pass the input from the base station 110 to the first control circuit 512 for processing and storage.

The first interface circuit 518 can include a first display interface 530. The first display interface 530 can include an output device. The first display interface 530 can couple to the position display 502 including a projector, a video screen, a touch screen, a speaker, or combinations thereof.

The first control circuit 512 can operate the first interface circuit 518 to display information generated by the navigation system 100. The first control circuit 512 can also execute the first software 526 for the other functions of the navigation system 100, including receiving location information from the first location circuit 520. The first control circuit 512 can further execute the first software 526 for interaction with the cloud network 104 via the first communication circuit 516. The first control circuit 512 can apply the RTK correction 109 for determining the actual position 111. The first control circuit 512 can operate the first interface circuit 518 to collect data from the base station 110. The first control circuit 512 can also receive location information from the first location circuit 520.

The first location circuit 520 can generate location information in the real-world coordinates, current heading, current acceleration, and current speed of the first device 102, as examples. The first location circuit 520 can be implemented in many ways.
For example, the first location circuit 520 can function as at least a part of the global positioning system, an inertial navigation system, a cellular-tower location system, a gyroscope, or any combination thereof. Also, for example, the first location circuit 520 can utilize components such as an accelerometer, gyroscope, or global positioning system (GPS) receiver.

The first location circuit 520 can include a first location interface 532. The first location interface 532 can be used for communication between the first location circuit 520 and other functional units or circuits in the first device 102, including the optical sensor 110. The first location interface 532 can receive information from the other functional units/circuits or from external sources, or can transmit information to the other functional units/circuits or to external destinations. The external sources and the external destinations refer to sources and destinations external to the first device 102, such as the position satellite 112.

The first location interface 532 can include different implementations depending on which functional units/circuits or external units/circuits are being interfaced with the first location circuit 520. The first location interface 532 can be implemented with technologies and techniques similar to the implementation of the first control interface 522.

The second device 106 can be optimized for implementing an embodiment of the present invention in a multiple device embodiment with the first device 102. The second device 106 can provide the additional or higher performance processing power compared to the first device 102. The second device 106 can include a second control circuit 534, a second communication circuit 536, a second user interface 538, and a second storage circuit 546.

The second user interface 538 allows an operator (not shown) to interface and interact with the second device 106. The second user interface 538 can include an input device and an output device. Examples of the input device of the second user interface 538 can include a keypad, a touchpad, soft-keys, a keyboard, a microphone, or any combination thereof to provide data and communication inputs. Examples of the output device of the second user interface 538 can include a second display interface 540. The second display interface 540 can include a display, a projector, a video screen, a speaker, or any combination thereof.

During the training process, the second control circuit 534 can receive base station data 535 through the second communication circuit 536. By way of an example, the second control circuit 534 can receive the base station data 535 from the first base station 214 of FIG. 2 and the Qth base station 218 of FIG. 2. The first base station 214 and the Qth base station 218 can communicate their base station data 535, including the satellite provided reference location 204 and the actual location 111, to the second device 106. The base station data 535 can be passed to the satellite correction module 115 for training the AI correction calculator 116 of FIG. 1. The actual location 111 of the first base station 214 and the Qth base station 218 is known to the satellite correction module 115. The base station data 535 can include the first actual position 216 of the first base station 214, the Qth actual position 220 of the Qth base station 218, and the satellite provided reference location 204 of FIG. 2 for the first base station 214 and the Qth base station 218, received from the satellite array 202.
The satellite correction module 115 can resolve the critical parameters that can convert the satellite provided reference location 204 to the actual location 111 of the base station 110. The critical parameters can be compiled in the RTK correction 109 that is sent to the first device 102.

The second control circuit 534 can execute a second software 542 to provide the intelligence of the second device 106 of the navigation system 100. The second software 542 can operate in conjunction with the first software 526. The second control circuit 534 can provide additional performance compared to the first control circuit 512.

The second control circuit 534 can operate the second user interface 538 to display information. The second control circuit 534 can also execute the second software 542 for the other functions of the navigation system 100, including operating the second communication circuit 536 to communicate with the first device 102 over the cloud network 104.

The second control circuit 534 can be implemented in a number of different manners. For example, the second control circuit 534 can be a processor, an embedded processor, a microprocessor, hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof.

The second control circuit 534 can include a second control interface 544. The second control interface 544 can be used for communication between the second control circuit 534 and other functional units or circuits in the second device 106. The second control interface 544 can also be used for communication that is external to the second device 106. The second control interface 544 can receive information from the other functional units/circuits or from external sources, or can transmit information to the other functional units/circuits or to external destinations. The external sources and the external destinations refer to sources and destinations external to the second device 106.

The second control interface 544 can be implemented in different ways and can include different implementations depending on which functional units/circuits or external units/circuits are being interfaced with the second control interface 544. For example, the second control interface 544 can be implemented with a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), optical circuitry, waveguides, wireless circuitry, wireline circuitry, or a combination thereof.

The second storage circuit 546 can store the second software 542. The second storage circuit 546 can also store the information such as data representing incoming images, data representing previously presented images, sound files, or a combination thereof. The second storage circuit 546 can be sized to provide the additional storage capacity to supplement the first storage circuit 514. During the training process, the second storage circuit 546 can receive the base station data 535 for two or more of the position satellites 112 in the satellite array 202 of FIG. 2.

For illustrative purposes, the second storage circuit 546 is shown as a single element, although it is understood that the second storage circuit 546 can be a distribution of storage elements. Also, for illustrative purposes, the navigation system 100 is shown with the second storage circuit 546 as a single hierarchy storage system, although it is understood that the navigation system 100 can include the second storage circuit 546 in a different configuration.
For example, the second storage circuit 546 can be formed with different storage technologies forming a memory hierarchal system including different levels of caching, main memory, rotating media, or off-line storage. The second storage circuit 546 can be a controller of a volatile memory, a nonvolatile memory, an internal memory, an external memory, or a combination thereof. For example, the second storage circuit 546 can be a controller of a nonvolatile storage such as non-volatile random-access memory (NVRAM), Flash memory, disk storage, or a volatile storage such as static random-access memory (SRAM).

The second storage circuit 546 can include a second storage interface 548. The second storage interface 548 can receive information from the other functional units/circuits or from external sources, or can transmit information to the other functional units/circuits or to external destinations. The external sources and the external destinations refer to sources and destinations external to the second device 106.

The second storage interface 548 can include different implementations depending on which functional units/circuits or external units/circuits are being interfaced with the second storage circuit 546. The second storage interface 548 can be implemented with technologies and techniques similar to the implementation of the second control interface 544.

The second communication circuit 536 can enable external communication to and from the second device 106. For example, the second communication circuit 536 can permit the second device 106 to communicate with the first device 102 over the cloud network 104. By way of an example, the second device 106 can provide the RTK correction 109 to the first device 102 in order to correct the satellite provided reference location 204 of the first device 102 to the real-world location 314 of FIG. 3.

The second communication circuit 536 can also function as a communication hub allowing the second device 106 to function as part of the cloud network 104 and not limited to being an endpoint or terminal circuit to the cloud network 104. The second communication circuit 536 can include active and passive components, such as microelectronics or an antenna, for interaction with the cloud network 104.

The second communication circuit 536 can include a second communication interface 550. The second communication interface 550 can be used for communication between the second communication circuit 536 and other functional units or circuits in the second device 106. The second communication interface 550 can receive information from the other functional units/circuits or can transmit information to the other functional units or circuits. The second communication interface 550 can include different implementations depending on which functional units or circuits are being interfaced with the second communication circuit 536. The second communication interface 550 can be implemented with technologies and techniques similar to the implementation of the second control interface 544.

The second communication circuit 536 can couple with the cloud network 104 to send information to the first device 102, including the updates for the satellite correction module 115 in the second device transmission 510. The first device 102 can receive information in the first communication circuit 516 from the second device transmission 510 of the cloud network 104. The navigation system 100 can be executed by the first control circuit 512, the second control circuit 534, or a combination thereof.
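As a schematic illustration of the exchange described above, the following Python sketch shows one way the critical parameters compiled into the RTK correction 109 might be packaged for the second device transmission 510. All field and function names are hypothetical assumptions for illustration only, not the patent's actual data format or API.

```python
# Hypothetical sketch of the critical parameters compiled into the RTK
# correction 109 and serialized for the second device transmission 510.
# Field names are illustrative assumptions only.
import json
from dataclasses import dataclass, asdict

@dataclass
class RTKCorrection:
    real_world_location: tuple   # refined position, e.g., (lat, lon, alt)
    pseudoranges: dict           # per-satellite pseudorange 316 values
    carrier_phases: dict         # per-satellite carrier phase 207 measurements
    integer_ambiguities: dict    # resolved integer ambiguity per satellite
    satellite_ids: list          # members of the satellite array 202 in use

def second_device_transmission(correction: RTKCorrection) -> bytes:
    # Placeholder serialization for transport over the cloud network 104.
    return json.dumps(asdict(correction)).encode()
```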
For illustrative purposes, the second device 106 is shown with the partition containing the second user interface 538, the second storage circuit 546, the second control circuit 534, and the second communication circuit 536, although it is understood that the second device 106 can include a different partition. For example, the second software 542 can be partitioned differently such that some or all of its function can be in the second control circuit 534 and the second communication circuit 536. Also, the second device 106 can include other functional units or circuits not shown in FIG. 5 for clarity.

The functional units or circuits in the first device 102 can work individually and independently of the other functional units or circuits. The first device 102 can work individually and independently from the second device 106 and the cloud network 104. The functional units or circuits in the second device 106 can work individually and independently of the other functional units or circuits. The second device 106 can work individually and independently from the first device 102 and the cloud network 104.

The functional units or circuits described for the first device 102 and the second device 106 can be implemented in hardware. For example, one or more of the functional units or circuits can be implemented using a gate array, an application specific integrated circuit (ASIC), circuitry, a processor, a computer, an integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), a passive device, a physical non-transitory memory medium containing instructions for performing the software function, a portion therein, or a combination thereof.

For illustrative purposes, the navigation system 100 is described by operation of the first device 102 and the second device 106. It is understood that the first device 102 and the second device 106 can operate any of the modules and functions of the navigation system 100. By way of a further example, the first device 102 can be the autonomous vehicle or the driver assisted vehicle. The first interface circuit 518 can receive input from the base station 110. The actual location 111 can be generated by the first control circuit 512 from the RTK correction 109 generated by the second device 106.

It has been discovered that the second device 106 can receive the base station data 535 in order to calculate the RTK correction 109 for each of the base station 110 and each of the position satellites 112 in the satellite array 202. As an example, the second control circuit 534 can pass the base station data 535 to the satellite correction module 115 for analysis. The satellite correction module 115 can generate the RTK correction 109 by calculating the critical parameters that convert the satellite provided reference location 204 to the actual location 111. It is understood that the process of converting the satellite provided reference location 204 to the actual location 111 is timing critical, and the AI correction calculator 116 of FIG. 1 can include hardware and software as necessary to complete the conversion in the shortest time possible.

Referring now to FIG. 6, therein is shown a control flow 601 of the navigation system 100 of FIG. 1 in an embodiment of the present invention. The control flow 601 depicts an establish link to vehicle module 602, in which the first base station 214 of FIG. 2 becomes aware of the first device 102 of FIG. 1 through the OTA communication 222 of FIG. 2.
The control flow 601 proceeds to a determine RTK parameters module 604, in which the first base station 214 communicates with the satellite array 202 of FIG. 2 in order to calculate the pseudorange 316 of FIG. 3, the carrier phase 207, the carrier phase ambiguity, the estimated clock error, a list of the satellite array 202 used to locate the first device 102, and the satellite provided reference location 204 of FIG. 2 for the first device 102 within the first RTK cell 224 of FIG. 2. The satellite provided reference location 204 can provide a precision of one to two meters, which is insufficient for the first device 102, such as an autonomous vehicle or a driver assisted vehicle.

The control flow 601 proceeds to a determine real-world position vehicle module 606, in which the second device 106 of FIG. 1 can perform the RTK calculations by the AI correction calculator 116 of FIG. 1 to calculate the real-world location 314 of FIG. 3 of the first device 102. The AI correction calculator 116 can refine the real-world location 314 to a precision of between one centimeter and three centimeters.

The control flow 601 proceeds to a transfer RTK parameters to vehicle module 608, in which the second device 106 can instruct the first base station 214 to transfer the real-world location 314 to the first device 102 by the OTA communication 222. The second device 106 can monitor the OTA communication 222 in order to determine the progress of the first device 102 through the first RTK cell 224.

The control flow then proceeds to a determine approaching a cell boundary module 610, in which the second device 106 can determine that the first device 102 is about to cross the cell boundary 226 of FIG. 2 from the first RTK cell 224 into the Qth RTK cell 228 of FIG. 2. By way of an example, with the first device 102 travelling at 60 miles per hour (MPH), it can only travel 1.056 inches in one millisecond. This provides ample time for the second device 106 to analyze the route of the first device 102 relative to the boundaries of the first RTK cell 224. When the first device 102 has passed the last possible route within the first RTK cell 224, the second device 106 can prepare for the crossing into the Qth RTK cell 228 by compiling the list of the critical parameters used in the first RTK cell 224, including the real-world location 314, the pseudoranges 316, the carrier phase 207 measurements, the list of the satellite array 202 used to locate the first device 102, and the corresponding integer ambiguity that were calculated by the AI correction calculator 116.

The control flow then proceeds to a transfer real-world position and RTK parameters module 612, in which the second device 106 can cause the first base station 214 to transfer the critical parameters from the first base station 214 to the Qth base station 218 through the parameter transfer 227 of FIG. 2. The Qth base station 218 can prepare for the transition of the first device 102 ahead of the actual crossing of the cell boundary 226. This allows a seamless transfer of the real-world position 314 without causing a delay for recalibration of the real-world position 314. When the first device 102 crosses into the Qth RTK cell 228 of FIG. 2, the real-world position 314 of the first device 102 is continuously known.

The control flow then proceeds to a maintain real-world position after crossing cell boundary module 614, wherein the second device 106 can monitor the progress of the first device 102 through the Qth base station 218, can support modifications of the list of the satellite array 202 used to guide the first device 102 and changes in the carrier phase 207, and can prepare for crossing the subsequent one of the cell boundary 226.
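The timing figure quoted in the determine approaching a cell boundary module 610 can be checked directly as a worked conversion, using 63,360 inches per mile and 3,600,000 milliseconds per hour:

$$60\ \frac{\text{mi}}{\text{h}} = \frac{60 \times 63{,}360\ \text{in}}{3{,}600{,}000\ \text{ms}} = 1.056\ \frac{\text{in}}{\text{ms}}$$

so even at highway speed the first device 102 moves only about an inch per millisecond, leaving the second device 106 ample time to stage the parameter transfer 227.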
It is understood that each of the modules listed in the above description can utilize software executed in a specific hardware configuration. By way of an example, the first control circuit 512 can communicate with the second device 106, the first base station 214, the Qth base station 218, the satellite array 202, and combinations thereof through the first communication circuit 516. As a further example, the second control circuit 534 can communicate with the first device 102, the first base station 214, the Qth base station 218, the satellite array 202, and combinations thereof through the second communication circuit 536.

It has been discovered that the navigation system 100 can provide continuous monitoring of the real-world position 314 of the first device 102 with one centimeter to three centimeters precision across regions of a route that crosses the cell boundary 226. The capability to transfer ahead the critical parameters required to calculate the real-world position 314 prior to the actual crossing of the cell boundary 226 can enable the first device 102 to maintain autonomous control across the entire region of the route.

Referring now to FIG. 7, therein is shown a flow chart of a method 700 of operation of a navigation system 100 of FIG. 1 in an embodiment of the present invention. The method 700 includes: receiving a base station data including an actual location and a satellite provided reference location in a block 702; transferring the base station data to an artificial intelligence (AI) correction calculator, already trained, in a block 704; transferring a pseudorange, of a satellite, from the AI correction calculator in a block 706; calculating a real-time kinematics (RTK) correction based on the pseudorange in a block 708; and enabling the communication circuit to transmit the RTK correction by an over the air (OTA) communication to the base station including the base station transferring the RTK correction to a device for correcting the satellite provided reference location to a real-world location and displaying on the device in a block 710.

The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization. Another important aspect of an embodiment of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance. These and other valuable aspects of an embodiment of the present invention consequently further the state of the technology to at least the next level.

While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.

1. Overview.

As shown in FIG. 1, the system 10 can include one or more data sources (e.g., satellites, reference stations, etc.), one or more computing systems 200, one or more GNSS receivers 100, one or more sensors 300, and/or any suitable components. As shown in FIG. 2, the method 20 can include receiving datasets S100, determining a receiver kinematic solution S200, detecting outliers S300, mitigating an effect of the outliers S400, determining an updated receiver position S500, and/or any suitable steps.

The system and method preferably function to detect (e.g., determine, identify, measure, etc.) outlier(s) in GNSS observations to enable determination of a receiver kinematic solution with a high accuracy (e.g., cm-level, dm-level, m-level, etc.), availability, integrity, and/or other property. However, the GNSS observations can be otherwise used.

Embodiments of the system and/or method can be used, for example, in autonomous or semi-autonomous vehicle guidance (e.g., for unmanned aerial vehicles (UAVs), unmanned aerial systems (UAS), self-driving cars, agricultural equipment, robotics, rail transport/transit systems, autonomous trucking, last mile delivery, etc.), GPS/GNSS research, surveying systems, user devices, mobile applications, internet-of-things (IOT) devices, and/or may be used in any other suitable application. In specific examples, the system (and/or components) can be coupled to any suitable external system such as a vehicle (e.g., UAV, UAS, car, truck, etc.), robot, railcar, user device (e.g., cell phone), and/or any suitable system, and can provide positioning data, integrity data (e.g., protection level data), and/or other data to said system, wherein the system can use the data for control and/or navigation.

2. Benefits.

Variations of the technology can confer several benefits and/or advantages.

First, variants of the technology can enable improved outlier detection in GNSS measurements. In specific examples, the technology can enable improved outlier detection using sensor fusion positioning estimates in a filter without a fully tightly coupled solution (e.g., without a circular feedback loop).

Second, variants of the technology can improve and/or assist with carrier phase ambiguity resolution. In a specific example, by using a duplicate filter, the outlier detection is less likely to disturb an ambiguity state (e.g., carrier phase ambiguity) as compared to performing the outlier detection within the primary filter. Because the ambiguity state is less likely to be disturbed, better performance (e.g., greater availability, greater accuracy, etc. from anticipated lower quality satellites, from fewer satellites, etc.) can be realized. Additionally or alternatively, ambiguity fixes that may not have been possible can be achieved.

Third, variants of the technology can enable looser thresholds on a number of satellites and/or fraction of satellite observations observed to be outliers for updating a sensor fusion filter.
For instance, rather than using satellite observations associated with at least 4 (and typically at least 5 or 6 to include redundancy) distinct satellites to update a sensor fusion filter, examples of the technology can enable the sensor fusion filter to be updated with fewer than 4 distinct satellites (e.g., 1, 2, 3 satellites). In another example, an outlier fraction threshold (e.g., a threshold for what percentage of the satellite signals can be identified as potential, probable, definite, etc. outliers before the positioning engine and/or sensor fusion engine does not perform an update using the satellite signals) can be increased. For instance, an outlier fraction threshold can be 25%, 33%, 40%, 50%, 60%, 75%, 80%, 90%, 95%, and/or any suitable percentage. This technical advantage can be achieved as variants of the technology can be more accurate at detecting outliers and/or inliers.

However, variants of the technology can confer any other suitable benefits and/or advantages.

3. System.

As shown in FIG. 1, the system can include one or more data sources, one or more computing systems, one or more sensors (e.g., IMU sensor, DMI sensor, wheel tick, etc.), and/or one or more GNSS receivers. The system can function to determine kinematic properties of a receiver (e.g., position, velocity, acceleration, jerk, jounce, snap, crackle, pop, higher derivatives of position with respect to time, attitude, elevation, altitude, etc.), detect outliers in GNSS observations, and/or can otherwise function.

The system preferably uses a set of data collected by one or more data sources. Data sources can include: receivers, sensors (e.g., located onboard the receiver, the external system, the reference stations, etc.), databases, satellites, reference stations, and/or any other suitable data source. Examples of data that can be used include: GNSS observations, sensor observations, and/or any other suitable data.

The receiver 100 preferably functions to receive a set of GNSS observations (e.g., satellite signals such as carrier phase and satellite code) from one or more satellites. In variants, the receiver can determine the location of the receiver (and/or external system) based on the GNSS observations. The receiver is preferably in communication with the computing system. However, the receiver can be integrated with the computing system, and/or the receiver and computing system can be arranged in any suitable manner. The receiver is preferably a stand-alone device (e.g., a GNSS receiver, antenna). However, the receiver can be integrated into an external system (e.g., be a component of an automobile, aero vehicle, nautical vehicle, mobile device, etc.), can be a user device (e.g., smart phone, laptop, cell phone, smart watch, etc.), and/or can be configured in any suitable manner.

The set of GNSS observations can include orbital data (e.g., ephemeris), timestamp, range rate data, carrier phase data, pseudorange data, and/or any suitable data. The set of GNSS observations can include and/or be associated with metadata (e.g., ephemeris data) and/or any suitable data or information. The set of GNSS observations preferably includes GNSS observations corresponding to satellites from a plurality of satellite constellations (e.g., Global Positioning System (GPS), GLObal Navigation Satellite System (GLONASS), BeiDou navigation satellite System (BDS), Galileo, Quasi-Zenith Satellite System (QZSS), etc.).
However, the set of GNSS observations can correspond to satellites from a single satellite constellation, can include data from an augmentation system (e.g., Satellite Based Augmentation System (SBAS) such as Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Multi-Functional Satellite Augmentation System (MSAS), GPS Aided Geo Augmented Navigation (GAGAN), Omnistar, StarFire, etc.; Ground Based Augmentation Systems (GBAS) such as Local Area Augmentation System (LAAS); etc.), and/or can include any suitable data.

In variants of the system including more than one receiver, each receiver can be configured to receive GNSS observations corresponding to a satellite constellation, to a carrier frequency (e.g., the L1, L2, L5, E1, E5a, E5b, E5ab, E6, G1, G2, G3, B1, B2, B2a, B2b, B2ab, B3, LEX, etc. frequencies), and/or corresponding to any suitable source.

The reference station(s) preferably function to receive a set of GNSS observations (e.g., reference station GNSS observations) and transmit the reference station GNSS observations to the computing system (and/or to the receiver). The GNSS observations from the reference station(s) can be used to determine corrections (e.g., local and/or global corrections such as to account for atmospheric effects such as ionosphere delay, troposphere delay, ionosphere gradient, etc.; orbit errors; clock errors; hardware biases; antenna offsets such as phase center offset, phase center variation, etc.; ocean tides; pole tides; solid Earth tides; etc.) to the set of GNSS observations measured (or otherwise received) by the receiver.

The sensor(s) 300 preferably function to measure sensor data associated with the external system and/or the GNSS receiver. The sensor data is preferably used to determine (e.g., independent of the GNSS observations) the external system (or the sensor) kinematic parameters, but can additionally or alternatively be used to assist (e.g., speed-up, correct, refine, etc.) the calculation (e.g., calculating the state vector, estimating the phase ambiguity) of kinematic parameters from the GNSS observations and/or be otherwise used. The sensors are preferably in communication with the computing system, but can be integrated into the computing system, connected to the computing system, be separate from the computing system (e.g., connect to the computing system through an intermediary system), and/or can otherwise be arranged. The sensor(s) can be: on-board the external system, on-board a separate external system, integrated into the GNSS receiver, separate from the GNSS receiver, and/or otherwise associated with the GNSS receiver.

The sensor data can include: inertial data (e.g., velocity, acceleration, angular velocity, angular acceleration, magnetic field, etc.), odometry, distance, pose (e.g., position, orientation, etc.), mapping data (e.g., images, point clouds), temperature, pressure, ambient light, landmarks (e.g., image key features), images, video feeds, and/or any other suitable data. The sensors can include one or more of: inertial measurement unit (IMU), accelerometer, gyroscope, magnetometer, odometer (e.g., wheel speeds; wheel ticks; steering angles; visual odometers such as cameras; etc.), distance measurement instrument (DMI), image sensor (e.g., camera, stereo camera, depth camera, etc.), pressure sensors, and/or any suitable sensor.

The computing system 200 preferably functions to process the data (e.g., GNSS observations) from the receiver and/or the reference stations.
The computing system can: aggregate the data (e.g., combine the receiver GNSS observations, reference station GNSS observations, and sensor data; reorganize the receiver GNSS observations, reference station GNSS observations, and sensor data such as based on the time stamp, time of transmission, time of receipt; etc.), filter the data (e.g., to calculate state vectors, ambiguities such as phase ambiguities, etc. associated with the data), calculate the receiver position (e.g., based on ambiguities), correct the data (e.g., correct the GNSS observations for orbit errors, clock errors, hardware biases, antenna offsets, atmospheric effects, ocean tides, pole tides, etc.), detect outliers (e.g., cycle slips, etc.), and/or can process the data in any suitable manner. The computing system can be local (e.g., on-board the external system, integrated in a receiver, integrated with a reference station, etc.), remote (e.g., cloud computing, server, networked, etc.), and/or distributed (e.g., between a remote and local computing system). The computing system is preferably communicably coupled to the receiver and to the reference station, but the computing system can be in communication with any suitable components.

In variants, the computing system can include one or more: communication module, filter, outlier detection module, and/or any suitable modules. As shown for example in FIG. 4, the computing system can include a positioning engine (e.g., including a filter such as a Kalman filter, extended Kalman filter, unscented Kalman filter, etc. configured to estimate a rover or receiver positioning solution based on the satellite observations), a fusion engine (e.g., a sensor fusion engine that can include a filter such as a Kalman filter, extended Kalman filter, unscented Kalman filter, etc. configured to estimate a rover or receiver fused positioning solution and/or sensor errors such as sensor bias using sensor measurements and a positioning solution from the positioning engine, using processed satellite observations such as disclosed in U.S. patent application Ser. No. 18/115,963 titled "SYSTEM AND METHOD FOR FUSING SENSOR AND SATELLITE MEASUREMENTS FOR POSITIONING DETERMINATION" filed 1 Mar. 2023, which is incorporated in its entirety by this reference, etc.), a duplication module (e.g., configured to duplicate a filter of the positioning engine and/or fusion engine, which can provide a technical advantage of decreasing a risk of data contamination and/or inability to fully roll back a filter update resulting from outlier detection), an outlier detector (e.g., configured to detect one or more outliers in the set of satellite observations, satellite signals, sensor data, sensor readings, sensor measurements, etc.), an outlier mitigator (e.g., configured to mitigate an impact of one or more outliers on the positioning solution and/or fused positioning solution), and/or any suitable modules and/or components. In some examples, the duplication module, the outlier detector, the outlier mitigator, and/or any suitable components can be integrated in the positioning engine and/or the fusion engine. However, the modules and/or engines can otherwise be integrated and/or isolated from one another.

In a specific example, the computing system can include any suitable components as disclosed in U.S. patent application Ser. No. 18/073,304 titled "SYSTEM AND METHOD FOR FUSING SENSOR AND SATELLITE MEASUREMENTS FOR POSITIONING DETERMINATION" filed 1 Dec. 2022, which is incorporated in its entirety by this reference.
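The duplication module's role can be illustrated with a minimal Python sketch. The class-free structure and all names below are assumptions for illustration, not components disclosed in the referenced applications.

```python
# Minimal sketch of the duplication module concept: copy the primary filter
# after its update so outlier testing runs on the copy, leaving the primary
# filter's states (e.g., carrier phase ambiguities) uncontaminated.
# All names are illustrative assumptions, not the patent's API.
import copy

def duplicate_filter(primary_filter):
    # A deep copy captures the updated states and covariances; tests on the
    # copy cannot contaminate the primary filter or require a rollback.
    return copy.deepcopy(primary_filter)

def test_for_outliers(primary_filter, observations, outlier_detector):
    shadow = duplicate_filter(primary_filter)              # duplication module
    return outlier_detector.detect(shadow, observations)   # outlier detector
```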
4. Method.

As shown in FIG. 2, the method 20 can include receiving dataset(s) S100, determining a receiver kinematic solution S200, detecting outliers S300, mitigating an effect of the outliers S400, determining an updated receiver position S500, and/or any suitable steps. The method preferably functions to determine (e.g., detect, measure, identify, analyze, etc.) outliers in a set of GNSS observations, where the GNSS observations (e.g., inliers of the GNSS observations) can be used to estimate (e.g., calculate, determine) a receiver position (e.g., as part of a positioning engine, fusion engine, etc.).

Steps and/or substeps of the method can be performed iteratively (e.g., for different epochs, for the same epoch, etc.), sequentially (e.g., for different external systems), and/or in any suitable order. The steps and/or substeps of the method can be performed in series and/or in parallel. The steps and/or substeps are preferably performed by a system as described above, but can be performed by any system.

Receiving datasets S100 functions to measure, acquire, receive, access, etc. one or more sets of data that can be used to determine a receiver positioning solution. Exemplary datasets can include: GNSS observations (e.g., from a shared epoch; from one or more epochs such as sequential epochs, epochs spaced by a predetermined amount of time, etc.), sensor data (e.g., sensor readings, sensor measurements, etc.), external system data (e.g., steering wheel, map data, etc.), and/or any suitable data. Receiving the datasets can include transmitting the datasets to a computing system and/or to a receiver (e.g., from a database), monitoring the datasets (e.g., for a predetermined event, for faults, etc.), and/or any suitable steps. The datasets can be stored (e.g., temporarily stored such as in short-term memory, cache, etc.). When more than one dataset is received (e.g., GNSS observations and sensor data), the datasets are preferably received (e.g., acquired) contemporaneously (e.g., concurrently, simultaneously, etc.), but can be received in any order.

Receiving datasets can include receiving the GNSS observations S100, which functions to measure and/or detect a set of satellite signals, where each satellite signal is associated with a satellite, at a reference station, a receiver, and/or at any suitable endpoint. The satellite signals can include satellite code, satellite pseudorange, carrier phase, and/or any suitable data. The GNSS observations are preferably received (e.g., acquired) at a GNSS acquisition frequency. The GNSS observations are preferably associated with a timestamp. The timestamp is preferably the time the GNSS observations were acquired but can be the time of receipt (e.g., by the computing system), the time of processing, and/or be any suitable time.

Receiving datasets can include receiving sensor data, which functions to receive data from one or more sensors. The sensor data is preferably received by a computing system, but can be received by any suitable component. The sensor data can be received from a sensor, a computing system (e.g., database, etc.), and/or from any suitable system. The sensor data is preferably received (e.g., acquired) at a sensor acquisition frequency. The sensor acquisition frequency can be less than, the same as, and/or greater than the GNSS observation frequency. The sensor data is preferably associated with a timestamp.
The timestamp is preferably the time the data was acquired but can be the time of receipt (e.g., by the computing system), the time of processing, and/or be any suitable time.

In a first illustrative example, receiving sensor data can include receiving acceleration and/or rotation data from an accelerometer and/or gyroscope. In a second illustrative example, receiving sensor data can include acquiring one or more images, where the images can be processed (e.g., using artificial intelligence, manually, using image processing algorithms, using stereo image algorithms, monocular vision algorithms, etc.). In a third illustrative example, receiving sensor data can include measuring or receiving wheel tick (or other odometry or distance measuring instrument) and/or steering angle data. In a fourth illustrative example, any or all of the preceding three examples can be combined. However, any suitable sensor data can be received.

In some variants, receiving the sensor data can include processing the sensor data. For example, the sensor data can be (pre)processed to remove, account for, mitigate, and/or otherwise correct for sensor error terms (e.g., sensor bias, sensor thermal bias, scale factors, nonlinearities, nonorthogonalities, misalignments, g-sensitivity, g2-sensitivity, cross-axis sensitivity, etc.). However, the sensor data can be processed within a filter (e.g., an error estimator of a fusion engine, in S200, etc.) and/or can otherwise be processed.

Receiving the datasets can include synchronizing the datasets, which can function to align datasets to a common time basis. For example, sensor data can be synchronized (e.g., time aligned) to the GNSS observations, GNSS observations can be synchronized to the sensor data, GNSS observations and sensor data can be aligned to an external reference, a common reference can be used, and/or the datasets can otherwise be synchronized.

Determining the receiver positioning solution S200 functions to determine the GNSS receiver and/or rover position to high accuracy (e.g., receiver position is known to within 1 mm, 2 mm, 5 mm, 1 cm, 2 cm, 5 cm, 1 dm, 2 dm, 5 dm, 1 m, 2 m, 5 m, 10 m, values or ranges therebetween, etc.). The receiver positioning solution is preferably determined by the receiver (e.g., a computing system thereof, a positioning engine, a fusion engine, etc.), but can be determined by the computing system and/or any component. The receiver positioning solution can be determined using sensor data (e.g., using a fusion engine), GNSS observations (e.g., using a positioning engine), and/or any suitable data (e.g., landmarks). The sensor data and the GNSS observations are preferably loosely coupled (e.g., a fusion filter ingests a position estimate determined based on the GNSS observations rather than the GNSS observations directly, as shown for example in FIG. 3, etc.), but can be tightly coupled (e.g., a fusion filter can ingest raw GNSS observations and raw sensor data), semi-tightly coupled (e.g., a fusion filter can ingest processed GNSS observations and raw or processed sensor data), and/or otherwise be coupled when both data types are used to determine the receiver positioning solution. The receiver positioning solution can be determined using an estimator and/or any suitable method and/or algorithm.
Exemplary estimators include: Kalman filters (e.g., unscented Kalman filters, extended Kalman filters, recursive Kalman filters, etc.), particle filters (e.g., Monte Carlo simulators), least squares solution calculators (e.g., running an iterative snapshot least squares method), a Gaussian process, and/or any suitable estimator.

Determining a positioning solution using GNSS observations functions to determine the GNSS positioning solution of the external system, sensor, and/or GNSS receiver based on the GNSS observations. Determining the GNSS positioning solution is preferably performed as GNSS observations are received, but can be performed as the sensor data is received and/or with any suitable timing. The GNSS positioning solution can be determined in a manner analogous to the determination of position, velocity, acceleration, higher order derivatives of position with respect to time (e.g., jerk, jounce, snap, crackle, pop, etc.), attitude, and/or other suitable positioning solution terms as disclosed in U.S. patent application Ser. No. 16/685,927 filed 15 Nov. 2019 entitled "SYSTEM AND METHOD FOR SATELLITE POSITIONING," U.S. patent application Ser. No. 16/817,196 filed 12 Mar. 2020 entitled "SYSTEMS AND METHODS FOR REAL TIME KINEMATIC SATELLITE POSITIONING," and/or U.S. patent application Ser. No. 17/022,924 filed 16 Sep. 2020 entitled "SYSTEMS AND METHODS FOR HIGH-INTEGRITY SATELLITE POSITIONING," each of which is incorporated in its entirety by this reference. However, the GNSS positioning solution can be determined from the GNSS observations in any manner. When GNSS observations associated with a plurality of GNSS receivers (e.g., antennas) are measured, determining the GNSS positioning solution can be performed independently for GNSS observations from different GNSS receivers, and/or determining the GNSS positioning solution can be performed in a manner that merges the GNSS observations for different GNSS receivers.

Determining the receiver positioning solution using GNSS observations can include: determining a carrier phase ambiguity (e.g., a float carrier phase ambiguity, an integer carrier phase ambiguity, etc.), calculating the receiver positioning solution based on the carrier phase ambiguity, determining a baseline vector between a receiver and a reference station, determining an absolute receiver positioning solution (e.g., by applying the baseline vector to the reference station location), determining a relative receiver positioning solution (e.g., relative to a prior time point, epoch, etc.), and/or any suitable steps. Determining the receiver positioning solution using GNSS observations can use a GNSS estimator (e.g., an estimator that can ingest GNSS observations), a fusion estimator (e.g., an estimator that can ingest GNSS observations, sensor data, etc.), and/or any suitable estimator.

Determining a positioning solution using sensor data functions to determine the positioning solution of the external system, sensor, and/or GNSS receiver based on the sensor data. Determining a sensor positioning solution (e.g., fused positioning solution) is preferably performed as sensor data is received, but can be performed at a delayed time (e.g., as GNSS observations are received), and/or with any suitable timing. The fused positioning solution is preferably determined using a fusion estimator (e.g., Kalman filter, extended Kalman filter, unscented Kalman filter, Gaussian process, etc.
that ingests sensor readings, GNSS positioning solution, processed satellite observations, raw satellite observations, corrections information, reference station observations, etc. to determine the fused positioning solution, sensor error(s), positioning solution covariances, etc.). Determining a fused positioning solution preferably includes determining the positioning solution using a mechanization model and integrating the mechanized data. The mechanization model and/or integrator can account for earth rotation, Coriolis forces, gravity, and/or any other real or fictitious forces to calculate or update the fused positioning solution from a previously computed positioning solution. However, the positioning solution can be otherwise determined from the sensor data. When sensor data associated with a plurality of sensors is measured, determining a positioning solution can be performed independently for sensor data from different sensors, sensor data for different sensors can be merged and/or determined in a unified manner (e.g., accounting for, leveraging, etc. a lever arm effect between sensors), and/or the kinematic parameters can otherwise be determined. The mechanization model preferably uses (e.g., leverages) small angle approximations, which can provide a technical advantage of simplifying the mechanization model (e.g., which can decrease a computational burden of the mechanization model). However, the mechanization model and/or integrator can otherwise function.

In some variants, the datasets can be measured at different measurement frequencies. In these variants, S200 can include separate lagging and real-time processes. For example, a GNSS positioning solution (and/or sensor error such as sensor bias) can be determined in a lagging process, and a sensor fused position (e.g., position update using incoming sensor measurements) can be determined in a real-time process (with potential updates as new GNSS positioning solutions and/or sensor errors become available). However, S200 can be performed with any suitable timing.

S200 can include converting the positioning solution from a GNSS receiver and/or sensor reference frame to a body (e.g., rover) reference frame (e.g., based on a pose, transformation, etc. between the data source and the body reference frame). In a specific example, the positioning solution (e.g., receiver positioning solution, rover positioning solution, GNSS positioning solution, fused positioning solution, kinematic solution, etc.) can be determined in a manner as disclosed in U.S. patent application Ser. No. 18/073,304 titled "SYSTEM AND METHOD FOR FUSING SENSOR AND SATELLITE MEASUREMENTS FOR POSITIONING DETERMINATION" filed 1 Dec. 2022, which is incorporated in its entirety by this reference (e.g., can include determining a motion state and modifying a filter based on the motion state, can include validating a positioning solution, fusing a GNSS positioning solution and sensor or fused positioning solution, etc.). However, the positioning solution can be determined in any manner.

Detecting outliers in the GNSS observations S300 functions to identify one or more GNSS observations as outliers. S300 is preferably performed by a receiver (e.g., a computing system of a receiver, an outlier detector, etc.), but can be performed by a computing system (e.g., a remote computing system), and/or by any suitable component. The outliers can be detected by an estimator (e.g., of a positioning engine, of a fusion engine, etc.), by an outlier detector, and/or by any suitable module.
Outliers can be detected sequentially, in parallel, and/or in any suitable order and/or timing. For example, a first outlier can be identified, the outlier can be mitigated (e.g., according to S400), and an estimator (e.g., a GNSS estimator, of a positioning engine, of a fusion engine, etc.) can produce an updated receiver positioning solution, where the updated receiver positioning solution can be used to check for a second (and so on) outlier. This process can be repeated (e.g., for a second, third, fourth, fifth, etc. outlier) until no outlier is detected. However, all outliers can be detected at the same time, and/or the outliers can be detected in any order and/or with any timing.

The outliers are preferably detected based on the receiver positioning solution (e.g., in state space, in a solution space, fused positioning solution compared to individual satellite positioning solution, etc.). However, additionally or alternatively, the outliers can be detected in an observation space (e.g., directly in the GNSS observations, such as by converting the fused positioning solution to a predicted GNSS observation for each satellite in view of the receiver), based on a receiver positioning displacement (e.g., which can be beneficial in some situations as carrier phase ambiguities do not necessarily need to be determined when a receiver displacement is used, but can leverage or use differences in carrier phase between different epochs such as consecutive epochs or nonconsecutive epochs separated in time by an amount of time depending on an application), and/or in any suitable manner.

The outliers are preferably determined using (e.g., within, using a positioning solution calculated with, etc.) a second GNSS estimator (e.g., where the first, primary, etc. GNSS estimator is used to produce a GNSS receiver positioning solution that is transmitted to a fusion filter, used by an external system, etc.) and/or an outlier detector associated therewith (as shown for example in FIG. 5). The use of the second GNSS estimator (e.g., a duplicate of the first or primary GNSS estimator) can provide a technical advantage of reducing data contamination from reversing an estimator update as outlier(s) are detected. However, the outliers can be determined using the GNSS estimator (e.g., the primary GNSS estimator), a fusion estimator (e.g., a primary fusion estimator, a secondary fusion estimator, etc.), an independent outlier detector, and/or any suitable component. The second GNSS estimator can be a duplicate of the GNSS estimator, an independent GNSS estimator (e.g., operating with the same inputs, models, noise, etc.; configured to generate the same output given the same input as a primary GNSS estimator; etc.), and/or can be any suitable estimator.

In an illustrative example, detecting an outlier can include duplicating (and/or replicating) a GNSS estimator (e.g., as used in S200). The GNSS estimator is preferably duplicated after the original estimator has completed an update (e.g., after S200 is complete, such that the duplicate copy has the updated states). However, the GNSS estimator can additionally or alternatively be duplicated before and/or during the original estimator update. The second GNSS estimator preferably receives (in addition to the inputs to the GNSS estimator or primary GNSS estimator as in S200) a fused positioning solution as determined using a fusion filter (e.g., a current receiver positioning solution, a predicted receiver positioning solution, etc.).
However, the second GNSS estimator can additionally or alternatively receive sensor data, sensor error(s), and/or any suitable information or data. The fused positioning solution (e.g., fusion receiver positioning solution, fusion receiver position solution, etc.) is preferably used to constrain the second GNSS receiver positioning solution (e.g., the position, average velocity, instantaneous velocity, acceleration, higher order derivatives of position with respect to time, attitude, etc.). For instance, the fusion receiver positioning solution can be used by the second GNSS estimator to test whether a state (e.g., a positioning solution estimated from a single satellite, single satellite constellation, etc.), GNSS observation, positioning solution, and/or other information is (e.g., includes, is associated with, etc.) an outlier. However, the fused positioning solution can additionally or alternatively be used by the second GNSS estimator to determine the GNSS positioning solution and/or can otherwise be used. The fused positioning solution can be provided at the GNSS antenna (e.g., the fusion estimator can be augmented with the GNSS antenna position), at an external vehicle reference (e.g., where the second GNSS estimator can receive a vehicle reference position offset relative to the GNSS antenna), and/or the fusion receiver positioning solution can be provided relative to any suitable reference.

The second GNSS estimator (and optionally the first GNSS estimator) can treat the phase ambiguity as a continuous variable and/or can attempt to constrain the phase ambiguity to an integer. For example, detecting one or more outliers can include (e.g., after calculating a second position estimate) calculating phase measurement residuals and comparing those residuals to integer multiples of full phase cycles (e.g., 2πn). If the residual is close to an integer multiple (e.g., differs from an integer value by at most a threshold), this may be indicative of a cycle slip rather than an erroneous observation.

In some variants, detecting an outlier can include identifying an outlier as a cycle slip. For example, when a phase measurement residual is close to (e.g., within a threshold value of) an integer multiple of a half or full phase cycle, an outlier can be identified as a cycle slip. In some variations, the use of displacements (e.g., as opposed to, in addition to, etc. absolute position) can provide a technical advantage for facilitating cycle slip detection. In another example, identifying outlier(s) as cycle slips can include verifying that the value of the cycle slip can be chosen reliably (e.g., by verifying that only a single integer cycle slip value is contained within a known window of variance around the value of the residual), and testing the cycle slip value against the residual (e.g., by verifying that the cycle slip value is within a window of variance of the residual value; where the two windows of variance described here may be distinct). However, an outlier can be identified as a cycle slip in any manner.
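A minimal sketch of the cycle-slip test described above, assuming the phase residual has already been expressed in units of carrier cycles; the function name and tolerance are illustrative assumptions, and the half-cycle variant mentioned in the text is omitted for brevity.

```python
# Classify a carrier phase residual (in cycles): a residual near a nonzero
# integer number of cycles suggests a cycle slip rather than a bad observation.
def classify_phase_residual(residual_cycles: float, tol: float = 0.05):
    nearest = round(residual_cycles)
    if nearest != 0 and abs(residual_cycles - nearest) <= tol:
        return "cycle_slip", nearest     # candidate integer slip value
    if abs(residual_cycles) > tol:
        return "outlier", None           # large residual, not integer-like
    return "inlier", None

print(classify_phase_residual(2.98))     # ('cycle_slip', 3)
print(classify_phase_residual(0.47))     # ('outlier', None)
```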
In a first specific example, a GNSS estimator can detect GNSS outliers by: generating a set of posterior observation residual covariances from the set of posterior observation residual values; calculating a set of posterior observation residual variances from the set of posterior observation residual covariances; scaling the set of posterior observation residual values using the set of posterior observation residual variances; and identifying at least one GNSS observation as a statistical outlier based on a corresponding scaled posterior observation residual value being outside a threshold range. In a variation of the first specific example (sometimes referred to as a scaled residual technique), detecting one or more outliers can include calculating posterior residual values for the satellite data observations. That is, for observations $z_k$ and posterior state estimate $\hat{x}_{k|k}$ (e.g., calculated in S200, the fused positioning solution, the GNSS positioning solution, etc.), detecting one or more outliers can include calculating the residual

$$\tilde{v}_{k|k} = z_k - H_k \hat{x}_{k|k}$$

where $H_k$ is an observation model that maps the true state space into the observed space and where $\tilde{v}_{k|k}$ is sometimes additionally or alternatively referred to as the measurement post-fit residual or posterior observation residual. From the posterior observation residual, detecting one or more outliers can include determining (e.g., calculating, estimating, etc.) the posterior observation residual covariance

$$C_k = R_k - H_k P_{k|k} H_k^T$$

where $R_k$ is the covariance of the observation noise $n_k$ and $P_{k|k}$ is the updated state covariance. In this variation, the variance of the posterior observation residual vector can be determined (e.g., estimated, calculated, etc.) from the posterior observation residual covariance:

$$\sigma^2 = \frac{v^T R_k^{-1} v}{DOF}$$

where $DOF$ is the number of degrees of freedom and where $v$ can alternatively be written as $Sz$, with $S$ a matrix having a trace equal to the $DOF$. From this, it can be said that

$$S = I - H_k P_{k|k} H_k^T R_k^{-1}.$$

This variance can be used to scale the residuals $\tilde{v}_{k|k}$ (e.g., by dividing residuals by their associated standard deviations or by their associated variances). The scaled residuals are then compared to a threshold window (e.g., one corresponding to plus or minus 3 standard deviations from the mean), and any observations falling outside the threshold window can be flagged as probable outlier observations (e.g., observations with greater than a threshold probability of being outliers). The threshold is preferably on the order of about 10 cm (e.g., 5-20 cm). Smaller thresholds can be more selective (e.g., yield a more accurate positioning solution) but provide less availability; the opposite can be true for larger thresholds (e.g., a less accurate positioning solution but greater availability). However, any suitable threshold(s) can be used (e.g., less than 10 cm, greater than 10 cm, depending on the accuracy needs of an application, depending on the availability needs of an application, etc.).
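To make the scaled residual technique concrete, here is a minimal NumPy sketch under stated assumptions: a linear observation model, per-observation variances taken from the diagonal of the posterior residual covariance, and illustrative identifiers throughout (none of the names come from the source).

import numpy as np

def detect_outliers_scaled_residuals(z, H, x_post, P_post, R, n_sigma=3.0):
    """Flag observations whose scaled posterior residuals fall outside a
    +/- n_sigma window, per the scaled residual technique described above.

    z: (m,) observations; H: (m, n) observation model; x_post: (n,)
    posterior state estimate; P_post: (n, n) posterior state covariance;
    R: (m, m) observation noise covariance.
    """
    v = z - H @ x_post                 # posterior (post-fit) residuals
    C = R - H @ P_post @ H.T           # posterior residual covariance
    std = np.sqrt(np.clip(np.diag(C), 1e-12, None))
    scaled = v / std                   # scale by per-observation sigma
    return np.abs(scaled) > n_sigma    # True where a probable outlier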
In a second specific example, a GNSS estimator can detect GNSS outliers by: generating a set of posterior observation residual covariances from the set of posterior observation residual values; calculating a set of posterior observation residual variances from the set of posterior observation residual covariances; identifying a presence of statistical outliers in the set of satellite positioning observations based on a first number of the set of posterior observation residual variances being outside a threshold range; generating a first reduced set of satellite positioning observations by removing a first subset of the set of satellite positioning observations; recalculating the set of posterior observation residual variances using the first reduced set of satellite positioning observations; determining that a number of the recalculated posterior observation residual variances outside the threshold range is lower than the first number; and, in response to this determination, identifying a subset of the GNSS observations as statistical outliers. In a variation of the second specific example (sometimes referred to as a variance threshold technique), the posterior residual, posterior residual covariance, and posterior residual variance can be calculated as in the first specific example (and/or variations thereof). This variation can be particularly (but not exclusively) useful for differenced measurements (and as such can be implemented with, or benefit from, using displacements in addition or alternative to positioning solutions), because differenced measurements can be correlated, making it more likely that an outlier in one observation corrupts residuals that correspond to different observations. In this variation, the posterior residual variances can be examined (e.g., compared to a threshold) directly. If one or more posterior residual variances are outside of a threshold range (e.g., one corresponding to plus or minus 1, 2, 3, 5, etc. standard deviations from the mean), this can be an indication that one or more outliers may be present in the observation data. The threshold is preferably on the order of about 10 cm (e.g., 5-20 cm). However, any suitable threshold(s) can be used (e.g., less than 10 cm, greater than 10 cm, depending on the accuracy needs of an application, depending on the availability needs of an application, etc.). In this variation of the second example, detecting one or more outliers can include removing a set of observations and recalculating the posterior residual variances. When the posterior residual variances fall below threshold levels, the algorithm can stop there. However, additionally or alternatively, the algorithm can try removing a different set of observations (and so on, until at least one or more of the variances falls below threshold levels, to find the highest quality posterior residual variances, etc.). Alternatively stated, the algorithm can continue until the number of posterior residual variances outside of a threshold range is less than a threshold number. Alternatively, in this variation, detecting one or more outliers can include calculating posterior residual variances for a number of reduced observation sets (i.e., different subsets of the whole set of observations) and choosing the reduced set with the lowest variance.
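A greedy version of the variance threshold search might look like the following sketch. It is illustrative only: a full implementation would re-run the estimator update for each reduced observation set rather than reusing one posterior covariance, and every name and threshold here is an assumption.

import numpy as np

def variance_threshold_outliers(z, H, x_post, P_post, R, max_var):
    """Iteratively drop the observation with the largest posterior residual
    variance until all remaining variances fall within the threshold.
    Returns indices of observations identified as outliers."""
    keep = list(range(len(z)))
    outliers = []
    while keep:
        idx = np.array(keep)
        # Posterior residual covariance restricted to the remaining set
        # (simplification: P_post is not re-estimated per reduced set).
        C = R[np.ix_(idx, idx)] - H[idx] @ P_post @ H[idx].T
        variances = np.diag(C)
        if np.all(variances <= max_var):
            break                      # remaining observations pass the test
        worst = keep[int(np.argmax(variances))]
        outliers.append(worst)
        keep.remove(worst)
    return outliers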
In a third specific example, a GNSS estimator can detect GNSS outliers using a combination of the first and second specific examples of detecting GNSS outliers. In a first variation of the third specific example, detecting outliers can be performed in a manner similar to the second specific example when a number of posterior residual variances outside of a threshold range is greater than (or equal to) a threshold number, and in a manner similar to the first specific example when the number of posterior residual variances outside of a threshold range is less than or equal to the threshold number. In a second variation of the third specific example, detecting outliers can be performed in a manner similar to the first specific example when a number of posterior residual variances outside of a threshold range is greater than (or equal to) a threshold number, and in a manner similar to the second specific example when the number of posterior residual variances outside of a threshold range is less than or equal to the threshold number. In another variation of the third specific example (sometimes referred to as a hybrid technique), the posterior residual, posterior residual covariance, and posterior residual variance can be determined (e.g., estimated, calculated, etc.) as in the scaled residual technique. The posterior residual variances can then be examined. When one or more posterior residual variances is above a threshold (e.g., the same or a different threshold than the one mentioned in the variance threshold technique), outlier(s) can be detected using the variance threshold technique. When one or more posterior residual variances is less than or equal to the threshold, the outlier(s) can be detected using the scaled residual technique. However, this variation can select between the variance threshold and scaled residual techniques in any suitable manner (e.g., based on the number of above-threshold or below-threshold posterior residual variances, a magnitude of the posterior residual variances, a posterior residual covariance threshold, a posterior residual magnitude, etc.). The threshold is preferably on the order of about 10 cm (e.g., 5-20 cm). However, any suitable threshold(s) can be used (e.g., less than 10 cm, greater than 10 cm, depending on the accuracy needs of an application, depending on the availability needs of an application, etc.). In a fourth specific example, outliers can be detected using a scaled residual technique, a variance threshold technique, a hybrid technique, and/or any suitable technique or combination of techniques, such as those disclosed in U.S. patent application Ser. No. 16/748,517 titled 'SYSTEMS AND METHODS FOR REDUCED-OUTLIER SATELLITE POSITIONING' filed 21 Jan. 2020, incorporated in its entirety by this reference. However, outliers can be detected based on a measurement innovation and/or can otherwise be detected. Mitigating an effect of the outlier(s) S400 functions to reduce or remove an impact of outliers (e.g., detected outliers) on an updated receiver positioning solution (e.g., as calculated in S500, future estimated receiver positioning solutions, etc.). S400 can be performed by an outlier mitigator, a computing system, a receiver, and/or by any suitable component. All outliers (e.g., outliers detected in S300) can be mitigated, a subset of outliers can be mitigated (e.g., a most egregious outlier can be mitigated and then satellite observations can be reevaluated for whether they are outliers, outliers beyond a mitigation threshold can be mitigated, etc.), and/or any suitable outliers can be mitigated.
Outlier(s) are preferably detected in the GNSS observations, but can additionally or alternatively be detected in the sensor data and/or in any suitable dataset(s). Mitigating the effect of outliers can include: removing one or more GNSS observations from the set of GNSS observations; applying a weight factor to one or more GNSS observations (e.g., applying a smaller weight factor to outliers, applying a larger weight factor to inliers, etc.); acquiring additional GNSS observations; repairing (e.g., correcting) a cycle slip (e.g., when S300 identifies, verifies, etc. outlier(s) as probable cycle slip(s)); adding new observations (e.g., synthetic satellite observations) with negative variances as updates to the positioning solution estimate (the new observations serving to remove the effects of detected outlier observations); and/or any suitable mitigation steps; a sketch of the weighting option appears after this paragraph. Mitigating the effect of outliers can be performed once, a predetermined number of times, until a criterion is met, iteratively with S300 (e.g., detecting whether any outliers are present in the dataset and mitigating the outliers before repeating the outlier detection to see whether the mitigation removed the outliers), and/or with any suitable frequency, period, and/or repetition rate. S400 and S300 are preferably performed iteratively until all outliers have been mitigated (e.g., removed from the dataset, weighted such that the previously identified outliers are no longer detected as outliers, as shown for example in FIG. 4, etc.). However, S300 and S400 can be performed iteratively until: the dataset includes GNSS observations from at most a threshold number of satellites (e.g., GNSS observations associated with only 1, 2, 3, 4, 5, 10, etc. unique satellites); a threshold fraction of satellite observations are outliers; the dataset includes GNSS observations from at most a threshold number of satellite constellations; a subset of outliers has been mitigated (e.g., outliers with a greatest difference relative to inliers, outliers with a large impact on the receiver positioning solution, outliers with a large covariance, etc.); a target number of iterations has been reached; a threshold residual is achieved; the positioning solution changes by at most a threshold amount between iterations; and/or any suitable iteration criterion is met. The GNSS estimator (e.g., an outlier detector associated therewith, an outlier mitigator associated therewith, memory, cache, etc.) used to detect the outliers (e.g., the secondary GNSS estimator, duplicate GNSS estimator, etc.) preferably retains (e.g., stores) the GNSS observations that are identified as outliers along with any mitigation process(es) applied thereto. However, the GNSS estimator can additionally or alternatively transmit the outliers (and/or mitigation effects) to the primary GNSS estimator (e.g., as outliers are detected) and/or fusion estimator, and/or can otherwise provide information regarding outliers to other endpoints. After outliers are detected (e.g., iterations are complete), the outliers (and/or associated mitigation processes) can be transmitted to the primary GNSS estimator. However, additionally or alternatively, one or more states of the second GNSS estimator can be transmitted (e.g., to avoid updating the primary GNSS estimator; for instance, a second GNSS receiver positioning solution can be transmitted), and/or any suitable data or information can be transmitted to the GNSS estimator, fusion estimator, and/or other suitable endpoint(s).
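As one concrete illustration of the weighting option listed above, the sketch below inflates the noise variance of flagged observations so that they contribute negligibly to the next estimator update. The inflation factor and all names are illustrative assumptions, not values from the source.

import numpy as np

def deweight_outliers(R, outlier_idx, inflation=1e4):
    """Mitigate flagged observations by inflating their noise variance in a
    copy of the observation covariance R, rather than removing them."""
    R_out = R.copy()
    for i in outlier_idx:
        R_out[i, i] *= inflation   # an outlier then barely moves the filter
    return R_out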
Determining the updated receiver positioning solution S500 functions to determine the GNSS receiver position using outlier-mitigated data. S500 can be performed in the same and/or a different manner from S200. The updated receiver positioning solution can have a higher availability, greater accuracy, better integrity, and/or can otherwise be related to the receiver positioning solution (e.g., from S200). As an illustrative example, a filter (e.g., a positioning engine filter, GNSS estimator, fusion engine, etc.) can be updated to remove or update states associated with satellite observations that were identified as outliers. S500 is preferably performed when at most a threshold fraction (e.g., percentage) of the total satellite observations are identified as outliers (e.g., probable outliers). In a specific example, the threshold percentage can be about 50% (i.e., when 50% or fewer of the total satellite observations are identified as outliers, S500 can be performed). However, the threshold percentage can additionally or alternatively be 10%, 20%, 25%, 30%, 33%, 40%, 60%, 70%, 75%, 80%, 90%, 95%, and/or any suitable value or range therebetween. However, S500 can be performed when any fraction of the satellite observations has been identified as outliers. Another technical advantage conferred by S300 and/or S400 on the performance of S500 in some variants is that the set of outlier-mitigated satellite observations can include satellite observations associated with fewer than 4 satellites (e.g., because the data quality of those satellite observations can be assured with sufficient accuracy). The use of satellite observations associated with fewer than 4 satellites can be beneficial for improving the availability of a satellite positioning solution in a noisy environment (e.g., a dense urban environment). However, satellite observations associated with greater than 4 satellites can be used. In variants, the receiver positioning solution (e.g., the original receiver positioning solution, the updated receiver positioning solution, an intermediate receiver positioning solution, etc.) can be transmitted to an external system, stored (e.g., cached), used to operate and/or control an external system, used to generate operation instructions for an external system (e.g., using a GNSS receiver computing system, an external system computing system, etc.), and/or used in any manner. The methods of the preferred embodiment and variations thereof can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions are preferably executed by computer-executable components integrated with a system for GNSS PVT generation. The computer-readable instructions can be stored on any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any suitable device. The computer-executable component is preferably a general or application specific processor, but any suitable dedicated hardware or hardware/firmware combination device can alternatively or additionally execute the instructions.
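Returning to the outlier-fraction gate described above (S500 performed only when at most a threshold fraction of observations are outliers), a one-line sketch of that check follows, with the 50% example value as the default; the function and parameter names are illustrative.

def should_compute_updated_solution(n_outliers, n_total, max_fraction=0.5):
    """Gate S500: proceed only when at most max_fraction of the satellite
    observations have been identified as (probable) outliers."""
    return n_total > 0 and n_outliers <= max_fraction * n_total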
Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
11860288
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and/or in fixed relation to each other. As used herein, stating that any part is in “contact” with another part is defined to mean that there is no intermediate part between the two parts. Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc. are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name. As used herein, “approximately” and “about” refer to measurements that may not be exact due to manufacturing tolerances and/or other real world imperfections. As used herein, “substantially real time” refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc. Thus, unless otherwise specified, “substantially real time” refers to real time +/−1 second.

DETAILED DESCRIPTION

As computing devices (e.g., laptops, tablets, smartphones, desktop computers, etc.) adapt to the new smart office and home environments, spatial location of audio sources is an important feature. Identifying the location of a sound source can be used to determine whether the sound comes from a user of a computing device, from some interference source, or from some other source that could be used for context awareness. Sound source location detection also enables the use of different types of audio enhancement techniques that either seek to isolate and/or focus on a sound when relevant to the operation of a computing device (e.g., the sound of a user speaking) or seek to reduce and/or cancel out the sound when not relevant to the operation of the computing device (e.g., interfering background noise). More particularly, sound source location detection can be used to enhance the selected audio source for speech recognition, speech separation, speaker identification, audio event detection, etc. Many such applications depend upon audio location detection occurring in substantially real time. One approach to detect the location of a sound source is through the use of an array of audio sensors (e.g., microphones). Due to the spacing of the microphones in an array, different ones of the microphones will capture a sound at slightly different times based on the distance the sound must travel from the source to the different microphones. This delay in different microphones registering a particular sound is commonly referred to as the time difference of arrival (TDOA) of the sound.
Based on principles of triangulation and the fixed spacing of the microphones, it is possible to use the TDOA of a sound to determine the location of the source of the sound relative to the microphones and, thus, relative to the computing device containing the microphones. High quality sound location detection is often handled with relatively large arrays of microphones containing 4 or more elements. The relatively large number of microphones enables different microphones to face in different directions so as to reliably capture sound originating from any direction. Furthermore, increasing the number of microphones in an array increases the precision and accuracy with which audio source locations may be identified. However, there is a cost associated with increasing the number of microphones in a computing device due to an increased bill of materials (for the additional microphones). Further, processing the multiple audio channels associated with the different microphones often involves computationally intensive cross-correlations based on fast Fourier transform (FFT) calculations and/or deep learning algorithms. Thus, increasing the number of microphones also adds to the computing overhead needed to process the increased number of audio channels and/or may call for a dedicated digital signal processor (DSP), thereby further adding to the bill of materials. Many existing computing devices include only two microphones. However, the microphones are not typically used for purposes of sound source location detection. Rather, two microphones are implemented to facilitate the reduction and/or cancellation of background noise. Frequently, such noise reduction is accomplished through orthogonal beamforming directed towards the typical location of the user (e.g., in front of the computing device). The reason sound source location detection is not typically implemented using only two microphones is that the triangulation of the TDOA of sound captured by two microphones cannot be resolved to a single location. Rather, assuming sound sources are in a plane that is parallel to a line extending between the two microphones, there are typically two possible source locations (that are geometric complements of one another) that will have the same TDOA for a sound. Sound sources may not necessarily be in a plane that is parallel to a line extending between the microphones, even if the source is stationary, because the orientation of the computing device may change. As a result, when three-dimensional space is taken into account, there may be more than two possible source locations corresponding to a particular TDOA of sound, and the determination of such locations is dependent on the orientation of the computing device. In practical terms, while using triangulation based on TDOA analysis is possible with only two microphones, implementations of such are limited to only 180° of the surrounding environment (e.g., only in front of the computing device or only behind the computing device) and limited to when the computing device is in a particular orientation. Examples disclosed herein enable the determination of the direction or location of a source of sound across 360° of space surrounding a computing device based on feedback from two microphones without additional feedback from additional microphones.
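The two-microphone ambiguity described above follows directly from the far-field TDOA geometry: the delay depends on the cosine of the angle to the microphone axis, which is the same for mirrored positions. A small sketch follows, with illustrative spacing and speed-of-sound values (not taken from the source).

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def tdoa_for_angle(angle_deg, mic_spacing=0.1):
    """Far-field TDOA (seconds) between two microphones for a source at
    angle_deg, measured from the line through the microphones."""
    return mic_spacing * np.cos(np.radians(angle_deg)) / SPEED_OF_SOUND

# A source 60 degrees in front and its mirror behind (-60, i.e., 300 degrees)
# produce the same delay, so TDOA alone cannot tell them apart:
assert np.isclose(tdoa_for_angle(60.0), tdoa_for_angle(-60.0))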
Thus, some examples are able to distinguish between a first source of sound located in front of the computing device and a second source of sound located behind the computing device, even when the sources are at complementary geometric positions relative to the two microphones. Specifically, unlike existing approaches that rely on triangulation of TDOA measures, examples disclosed herein determine the location or direction of a source of a sound through the use of artificial intelligence. Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the model may be trained with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations. Many different types of machine learning models and/or machine learning architectures exist. In some examples, a shallow neural network model is used. As used herein, a shallow neural network means a neural network that has no more than one hidden layer. This is distinguishable from a deep neural network, which has many hidden layers. Although examples disclosed herein may be implemented with a deep neural network, using a shallow neural network model is advantageous because it requires less computational capacity, thereby increasing efficiency relative to deep neural network solutions. Furthermore, testing has shown that example shallow neural network models disclosed herein can estimate the direction of the source of a sound with greater than 95% accuracy. Thus, examples disclosed herein avoid the cost of additional components (e.g., additional microphones) and the associated burdens on computational overhead that are present in known sound source location detection systems. In general, machine learning models/architectures that are suitable for use in the example approaches disclosed herein include any type of classifier capable of classifying a particular input (e.g., feedback from two microphones) into many (e.g., more than two) outputs. More particularly, in some examples, the neural network is defined to have a particular number of outputs corresponding to different angular intervals of 360° space. For instance, 12 different outputs may correspond to 12 different classifications associated with 30° intervals to fully cover 360° of rotation. Thus, the higher the number of outputs for the neural network, the greater the resolution of the location detection. However, increasing the number of outputs also increases the size and/or complexity of the neural network and, thus, the associated memory and/or processing requirements. In some examples, the neural network is defined to have a no-audio output separate from the outputs associated with particular angles (e.g., angular intervals) across 360° space to account for inputs that either cannot otherwise be classified and/or that correspond to moments when no sound is detected. In general, implementing a ML/AI system involves two phases: a learning/training phase and an inference phase. In the learning/training phase, a training algorithm is used to train a model to operate in accordance with patterns and/or associations based on, for example, training data.
In general, the model includes internal parameters that guide how input data is transformed into output data, such as through a series of nodes and connections within the model. Additionally, hyperparameters are used as part of the training process to control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). Hyperparameters are defined to be training parameters that are determined prior to initiating the training process. Different types of training may be performed based on the type of ML/AI model and/or the expected output. For example, supervised training uses inputs and corresponding expected (e.g., labeled) outputs to select parameters for the ML/AI model that reduce model error (e.g., by iterating over combinations of candidate parameters). As used herein, labelling refers to an expected output of the machine learning model (e.g., a classification, an expected output value, etc.). Alternatively, unsupervised training (e.g., used in deep learning, a subset of machine learning, etc.) involves inferring patterns from inputs to select parameters for the ML/AI model (e.g., without the benefit of expected (e.g., labeled) outputs). In examples disclosed herein, ML/AI models are trained using stochastic gradient descent in a supervised manner. However, any other training algorithm may additionally or alternatively be used. Training is accomplished using a training set of feedback signals from two microphones capturing sounds generated at known angular positions relative to the microphones (and the associated computing device). The known positions of the feedback signals are used to label the training dataset so that the neural network can compare outputs (e.g., estimated locations of the sound source) to the ground truth (e.g., known positions of the sound source) and adjust the model accordingly. In examples disclosed herein, training is performed until an acceptable amount of error is achieved (e.g., less than 5% error). In some examples, the training data is sub-divided into a first set of data for training the machine learning model and a second set of data for validating the machine learning model. In some examples, the training data may also be divided into a third set of data for testing the machine learning model. In some examples, the training process may be performed multiple times to develop multiple different models specific to different orientations of a computing device. Additionally or alternatively, in some examples, the training dataset may include feedback signals from the two microphones associated with different orientations of the computing device such that a single machine learning model may be trained to determine the location of a sound source despite changes to the orientation of the computing device. Once training is complete, the model is deployed for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the model. In some examples, the model is stored locally on the computing device that is to implement or execute the model. Once trained, the deployed model may be operated in an inference phase to process data.
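As a rough illustration of the supervised SGD training setup described above, the following sketch trains a one-hidden-layer classifier over 13 classes (12 angular sectors plus a no-audio class) on synthetic stand-in data. All dimensions, hyperparameters, and the use of scikit-learn are illustrative assumptions, not details from the source.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 65))      # stand-in cross-correlation segments
y = rng.integers(0, 13, size=2000)   # labels: sectors 0-11, 12 = no audio

# Hold out a validation split, as described above
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=0)

# One hidden layer -> "shallow" per the definition above; trained with SGD
clf = MLPClassifier(hidden_layer_sizes=(64,), solver="sgd",
                    learning_rate_init=0.01, max_iter=300)
clf.fit(X_tr, y_tr)
print("validation accuracy:", clf.score(X_val, y_val))  # ~chance on noise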
In the inference phase, data to be analyzed (e.g., live data) is input to the model, and the model executes to create an output. This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the model to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the machine learning model. Moreover, in some examples, the output data may undergo post-processing after it is generated by the AI model to transform the output into a useful result (e.g., a display of data, an instruction to be executed by a machine, etc.). More particularly, in examples disclosed herein, a determination of a location or direction of a source of sound may be used to generate or implement a response including determining a context of a computing device, adjusting the audio processing of feedback signals generated by the microphones to either isolate and/or focus on a detected sound or to reduce and/or cancel out the detected sound, identifying spoken words through speech recognition, identifying a speaker through voice recognition, detecting a particular audio event associated with the captured sound, etc. FIG. 1 illustrates an example computing device 100 constructed in accordance with teachings disclosed herein. In this example, the computing device 100 corresponds to a laptop with a base 102 and a lid 104. However, the computing device 100 may correspond to any suitable type of computing device (e.g., a tablet, a smartphone, a desktop computer, etc.). The computing device 100 includes two spaced apart microphones 106, 108. In the illustrated example, the microphones 106, 108 are positioned on a front surface 110 of the lid 104 to face in a same direction as a screen 112 of the computing device 100 (e.g., a user facing direction). For purposes of explanation, FIG. 1 also shows a number of azimuthal angles 114-136 distributed across 360 degrees of space surrounding the computing device 100. As shown in FIG. 1, the first angle 114 points to the left of the computing device 100 (from the perspective of a user facing the front (e.g., the screen 112) of the device) and is designated as the 0° position. Each successive angle 114-136 corresponds to an increment of 30° around the computing device 100 in a counter-clockwise direction beginning from the 0° position. Thus, in the illustrated example, the fourth angle 120 corresponds to the 90° position and is directly in front of the computing device 100, the seventh angle 126 corresponds to the 180° position and is to the right of the computing device (directly opposite the 0° position), and the tenth angle 132 corresponds to the 270° position and is directly behind the computing device 100 (directly opposite the 90° position). The particular positions of the angles 114-136 relative to the computing device 100 shown in FIG. 1 are for purposes of explanation only. Thus, the 0° position 114 may be defined in any direction relative to the computing device, and the other angles may extend from there in either a clockwise or counter-clockwise direction. Furthermore, in some examples, the angular interval between adjacent ones of the angles 114-136 may be greater or smaller than 30° to designate a different number of angles that may be less than or more than the 12 angles shown in FIG. 1.
In the illustrated example, the computing device 100 includes an example sound source location analyzer 138 that is communicatively coupled with the microphones 106, 108 to receive and process audio feedback signals generated by the microphones 106, 108. More particularly, in some examples, the sound source location analyzer 138 implements a machine learning model that uses the feedback signals as inputs to generate an output identifying the particular angle 114-136 that corresponds to the direction associated with a source of a sound represented in the feedback signals. In some examples, the feedback signals generated by the microphones 106, 108 undergo pre-processing before they are analyzed using a neural network based on an associated machine learning model. More particularly, in some examples, the sound source location analyzer 138 calculates a vector corresponding to the cross-correlation of the two audio signals, and the cross-correlation vector is used as the basis for the input to the machine learning model. In some examples, the vector corresponds to a generalized cross-correlation with phase transform (GCC-PHAT). GCC-PHAT analysis is commonly used to determine the time difference of arrival (TDOA) between two audio signals when the signals contain relatively little autocorrelation (e.g., relatively low reverberation, relatively rich frequency content). Furthermore, GCC-PHAT analysis can make a system relatively robust to a certain amount of reverberation. The analysis of two audio signals corresponding to a sound captured at slightly different times using GCC-PHAT in the time domain generates a cross-correlation vector that typically includes a single peak or spike that is isolated to determine the delay or time difference between the two signals. In a typical TDOA analysis, this time delay is then used with triangulation calculations to determine a location or direction of the source of the sound. Unlike traditional TDOA analysis that isolates and focuses on the peak value in the cross-correlation vector produced through GCC-PHAT analysis, examples disclosed herein use a plurality of values in the cross-correlation vector in addition to the peak value. More particularly, in some examples, a segment of values in the cross-correlation vector surrounding and including the peak value is identified and used as inputs to a machine learning model. This is diagrammatically illustrated in FIG. 2. As shown in the illustrated example, two separate signals 202, 204 are received on two separate channels corresponding to the two microphones 106, 108. As represented in the upper left corner of FIG. 2, the first signal 202 is slightly delayed relative to the second signal 204. As time advances, individual signal frames or audio blocks 206, 208 containing a particular number of samples of the audio signals 202, 204 (e.g., two particular time-domain vectors) are compared through a GCC-PHAT process to generate a cross-correlation vector 210. As shown in the illustrated example, the cross-correlation vector 210 includes a peak value 212 corresponding to a spike that is typically easily identifiable because most other values in the vector are at or near zero. However, as noted above, rather than merely isolating the peak value 212, examples disclosed herein identify a segment or portion 214 of the cross-correlation vector 210.
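A standard GCC-PHAT computation of the kind referenced above can be sketched as follows. This is a common textbook formulation, not the patent's specific implementation; the frame lengths and the epsilon guard are illustrative.

import numpy as np

def gcc_phat(frame_a, frame_b):
    """Generalized cross-correlation with phase transform of two equal-length
    frames: whiten the cross-power spectrum to unit magnitude, then inverse
    transform. Returns the correlation vector with the zero-lag bin centered."""
    n = len(frame_a) + len(frame_b)
    A = np.fft.rfft(frame_a, n=n)
    B = np.fft.rfft(frame_b, n=n)
    cross = A * np.conj(B)
    cross /= np.maximum(np.abs(cross), 1e-12)  # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    return np.fft.fftshift(cc)                 # center zero lag

# Example: a pure delay between frames yields a single sharp peak whose
# offset from the center of the vector equals the delay in samples.
rng = np.random.default_rng(1)
x = rng.normal(size=256)
delayed = np.concatenate([np.zeros(5), x[:-5]])  # x delayed by 5 samples
cc = gcc_phat(delayed, x)
print("estimated delay:", np.argmax(cc) - len(cc) // 2)  # expected: 5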
The segment 214 is then provided as an input to a machine learning model executed by a location analyzer to determine the location or direction of the source of the sound captured by the microphones 106, 108 at the particular point in time corresponding to the signal frames 206, 208. In some examples, this process iterates across time such that new signal frames are analyzed to generate new cross-correlation vectors from which a segment is isolated to feed into the machine learning model. The plot 216 in the bottom right corner of FIG. 2 represents a time series of multiple vector segments 214 corresponding to portions of successive cross-correlation vectors 210 calculated for different signal frames 206, 208 along a 10 second period of time aligned with the corresponding time period for the signals (shown in the upper right corner of FIG. 2). The light colored regions 218 in the plot 216 correspond to the peak value 212 included within the segments 214 of each successive cross-correlation vector 210 (represented vertically in the plot 216) in the time series. As can be seen in the illustrated example, the position of the peak value 212 is at different locations at different points in time, indicative of different locations of the source of sound relative to the computing device 100. That is, the change in position of the peak value 212 within the vector 210 (and, thus, in the corresponding segment 214 of the vector) indicates a difference in the time of arrival of the underlying sounds, which can be used to predict a location or direction of the source of the sound. However, the source of a sound is ambiguous when there are only two microphones 106, 108 because multiple locations for the sound source are possible. This is the reason that examples disclosed herein consider all values in the segment 214 of the cross-correlation vector 210 rather than merely the position of the peak value 212. More particularly, the non-peak values in the segment 214 contain frequency-relevant information associated with the audio signals that can be used to reliably discriminate between signals coming from different but complementary locations relative to the microphones. That is, while a sound from two different but complementary locations may result in a cross-correlation vector 210 with the peak value 212 at the same position (due to the same TDOA for both locations), differences in the frequency information contained in the signals provided by each microphone can nevertheless enable sounds from one location to be distinguished from the other location. However, such information can only be analyzed if it is retained in the segment 214 that is provided as an input to the machine learning model, rather than only considering the position of the peak value 212 as is done in traditional TDOA analysis. In some examples, as shown in the illustrated example of FIG. 2, the segment 214 is a truncated mid-section of the cross-correlation vector 210. That is, in some examples, the segment 214 is centered in the cross-correlation vector 210 with the same and/or approximately the same number of values excluded from the cross-correlation vector 210 on either side of the segment 214. The proportion of the cross-correlation vector 210 included in the segment 214 may be any suitable proportion of the vector 210. In some examples, the segment 214 may include the entirety of the cross-correlation vector 210.
However, to increase efficiency by reducing the amount of data to be processed (e.g., reducing computational overhead), the segment may include significantly less than all values in the cross-correlation vector 210 (e.g., less than 50%, less than 25%, less than 20%, less than 10%, etc.). On the other hand, the segment 214 is selected to include a sufficient proportion of the cross-correlation data (e.g., at least 1%, at least 5%, at least 10%, etc.) to enable the machine learning model to produce reliable estimates of the location of a sound source (e.g., to be able to discriminate between complementary locations associated with the same TDOA). In some examples, the number of elements in the cross-correlation vector 210 included in the segment 214 is at least twice the number of samples corresponding to the maximum expected time delay between the two microphones 106, 108 detecting a particular sound; a sketch of this sizing rule appears after this paragraph. The maximum expected time delay for a sound to be detected by the two microphones 106, 108 is likely to occur at extreme angles where the source of the sound is located to the side at a point that is collinear with both microphones 106, 108. Thus, in the illustrated example of FIG. 1, the extreme angles where the maximum expected delay would occur correspond to the first angle 114 (e.g., the 0° position) and the seventh angle 126 (e.g., the 180° position). Including at least twice as many samples from the cross-correlation vector 210 as correspond to the maximum expected delay ensures that the segment 214 includes values surrounding the peak value 212 regardless of the position of the peak value 212 within the cross-correlation vector 210. Differences in the frequency information of the two audio signals generated by the two microphones 106, 108 for sounds originating from complementary sources may arise from the physical arrangement of the microphones 106, 108 in the computing device 100 relative to other components in the computing device 100 and how the microphones 106, 108 and other components are positioned relative to the sources of sound. For instance, in the illustrated example of FIG. 1, the angles 114-136 are in a plane generally parallel with the base 102 of the computing device 100 with the lid 104 in an upright or open position in which the lid extends generally transverse to the plane of the angles 114-136 and the base 102. Further, in the illustrated example, the microphones 106, 108 are spaced apart horizontally along the lid 104 such that the microphones 106, 108 are arranged in a line that is substantially parallel to the plane of the angles 114-136 regardless of the particular orientation of the lid 104 relative to the base 102. While the alignment of the microphones 106, 108 remains parallel to the plane of the angles 114-136 regardless of the orientation of the lid 104, the spatial relationship of the microphones 106, 108 relative to the surrounding area still changes as the orientation of the lid 104 changes. For example, when the lid 104 is in the generally upright or open position shown in the illustrated example, the microphones 106, 108 are oriented to face toward the front of the computing device 100 (e.g., in a user facing direction toward the 90° position). In such a position, there is an unobstructed path towards the microphones 106, 108 from locations associated with angles in a front 180 degree region that includes the second through sixth angles 116-124.
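Returning to the segment sizing rule noted above (a centered segment at least twice the maximum expected inter-microphone delay, in samples), a minimal sketch follows; the sample rate, spacing, speed of sound, and margin are illustrative assumptions.

import numpy as np

def center_segment(cc, fs=16000, mic_spacing=0.1, margin=2.0):
    """Extract a truncated mid-section of a centered GCC-PHAT vector, sized
    from the maximum expected delay (source collinear with both mics)."""
    max_delay_samples = int(np.ceil(fs * mic_spacing / 343.0))
    half = int(margin * max_delay_samples)  # half-width; full segment is
    mid = len(cc) // 2                      # well over 2x the maximum delay
    return cc[mid - half: mid + half + 1]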
Further, when the lid 104 is in the upright or open position shown in FIG. 1, the backside or top side of the lid 104 obstructs a direct path toward the microphones 106, 108 from locations associated with angles in a back 180 degree region that includes the eighth through twelfth angles 128-136. As a result, sound originating from behind the computing device 100 may have a slightly different frequency signature than sound originating from in front of the computing device 100, even if the two locations are complementary so as to result in the same TDOA between the two microphones. These differences in frequency signatures may be reflected in the segment 214 of the cross-correlation vector 210 that serves as an input to the machine learning model, thereby enabling the system to distinguish a sound source located in a direction anywhere across the front 180 degree region in front of the computing device 100 from a sound source located in a direction anywhere across the back 180 degree region behind the computing device 100. Notably, if the lid 104 were extended further back than what is shown in FIG. 1 until the lid 104 is substantially parallel with the base 102 and the plane of the angles 114-136, the microphones 106, 108 would face upwards in a direction transverse to the plane of the angles 114-136 rather than forwards towards the 90° position. In this upward facing position, there is an unobstructed path towards the microphones 106, 108 from all directions across the full 360° space surrounding the computing device 100 (assuming the locations are above the plane defined by the front surface 110 of the lid 104 where the microphones 106, 108 are located). As such, sounds originating from the back 180 degree region are no longer obstructed by the backside of the lid 104. While this may alter the frequency signature of such sounds as detected by the microphones 106, 108 relative to when the lid 104 is in the upright position shown in FIG. 1, in some examples, the sound source location analyzer 138 may nevertheless still be able to distinguish the location of sounds across the full 360 degree area surrounding the computing device 100. However, in some examples, different machine learning models may be used to account for the different orientations of the computing device 100. FIG. 3 is a block diagram of the example sound source location analyzer 138 of FIG. 1. In the illustrated example, the sound source location analyzer 138 includes an example audio sensor interface 302, an example audio signal analyzer 304, an example cross-correlation analyzer 306, an example location analyzer 308, an example model data store 310, an example orientation analyzer 312, and an example response generator 314. The example audio sensor interface 302 receives audio feedback signals from the microphones 106, 108 and provides the same to the example audio signal analyzer 304. The example audio signal analyzer 304 may pre-process the audio signals from the microphones 106, 108. In some examples, the audio signal analyzer 304 identifies and/or isolates individual signal frames or audio blocks (e.g., the signal frames 206, 208) for analysis on an ongoing basis. In some examples, successive ones of the signal frames overlap in time. That is, the ending of one signal frame may occur after the beginning of a subsequent signal frame. In other examples, the ending of one signal frame corresponds to the beginning of a subsequent signal frame. In some examples, the audio signals are stored in memory (e.g., a buffer) to enable the isolation and/or extraction of the different signal frames over time.
In some examples, the signal frames 206, 208 are defined to correspond to relatively short periods of time (e.g., less than 100 microseconds) to enable ongoing and substantially real-time analysis of sounds captured by the microphones 106, 108. Further, in some examples, the audio signal analyzer 304 normalizes the signal frames 206, 208 to reduce (e.g., eliminate) the effect of each microphone 106, 108 having a slightly different gain than the other. The example cross-correlation analyzer 306 takes the pre-processed time-domain vectors corresponding to the two signal frames 206, 208 and calculates a corresponding cross-correlation vector (e.g., the cross-correlation vector 210). More particularly, in some examples, the vector corresponds to the output of a GCC-PHAT analysis of the signal frames 206, 208. Further, in some examples, the example cross-correlation analyzer 306 identifies and/or isolates a particular segment 214 of the cross-correlation vector 210. In some examples, the segment 214 corresponds to a fixed proportion of a mid-section of the cross-correlation vector 210 that includes at least a threshold number of the values of the full cross-correlation vector 210 (e.g., at least twice the number of samples corresponding to the maximum expected time delay between the two microphones 106, 108 detecting a particular sound). In some examples, the size of the segment 214 may be adjusted to strike an appropriate balance between accuracy of the sound source location detection and computational efficiency. The example location analyzer 308 implements or executes a machine learning model (e.g., a trained neural network) to analyze the segment 214 of the cross-correlation vector 210 to estimate or determine a location or direction of a source of sound corresponding to the audio signals used in generating the cross-correlation vector 210. In some examples, the machine learning model is stored in the example model data store 310. In some examples, multiple different machine learning models may be maintained in the model data store 310. In some such examples, the location analyzer 308 determines which of the models to use based on orientation information provided by the orientation analyzer 312. That is, in some examples, the example orientation analyzer 312 determines the orientation of the computing device 100 based on feedback from one or more orientation sensors of the computing device 100 (e.g., an accelerometer, a gyroscope, a magnetometer, a laptop hinge sensor, etc.). More particularly, in the case of a laptop, the orientation analyzer 312 may determine the orientation of the lid 104 of the computing device 100 relative to the base 102 and/or relative to a surrounding environment. Based on the orientation of the computing device 100, the example location analyzer 308 may select a particular machine learning model to execute. In some examples, a single machine learning model may account for multiple different orientations of the computing device such that the orientation analyzer 312 is unnecessary. In other examples, a machine learning model that accounts for multiple different orientations may use orientation information generated by the orientation analyzer 312 as an input. In some examples, if a particular orientation is determined to make the outputs of a machine learning model unreliable, the location analyzer 308 may determine to suppress the execution of the machine learning model and/or to flag the output of the machine learning model as suspect.
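One simple way to implement the gain normalization mentioned above is per-frame RMS normalization, sketched below. The choice of RMS normalization is an assumption; the source does not specify a particular normalization scheme.

import numpy as np

def normalize_frame(frame):
    """Scale a frame to unit RMS so a fixed gain mismatch between the two
    microphones does not skew the cross-correlation between their frames."""
    rms = np.sqrt(np.mean(np.square(frame)))
    return frame / max(rms, 1e-12)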
In some examples, the output of the machine learning model identifies a particular angle or direction corresponding to the estimated location or direction of the source of a detected sound. The example response generator 314 generates and/or implements a response based on the output (e.g., the angle or direction) of the machine learning model. In some examples, the response may be a signal or trigger that activates another component of the computing device 100 to implement further action. In some examples, the response corresponds to speech recognition and/or voice recognition of a person speaking within detection range of the microphones 106, 108. In some examples, the response may include determining a context of the computing device 100 and/or triggering a particular action in response to the determined context. In some examples, the response may include adjusting the audio processing of feedback signals generated by the microphones to either isolate and/or focus on a detected sound when relevant to the operation of the computing device 100 (e.g., a user speaking) or to reduce and/or cancel out the detected sound when not relevant to the operation of the computing device (e.g., interfering background noise). For example, if the detected sound is determined to come from a direction that is within a threshold angle (e.g., 15°, 30°, 45°, 60°, etc.) of an expected user direction (e.g., in front of the computing device 100 at the 90° position in FIG. 1), the sound may be assumed to come from a user of the computing device 100 such that the response includes identifying the sound as originating from the user and/or isolating the sound for further processing (e.g., to perform speech and/or voice recognition analysis). On the other hand, if the detected sound is determined to come from a direction that is outside the threshold angle of the expected user direction, the sound may be assumed to be background noise such that the response includes disregarding the sound and/or reducing and/or cancelling out noise associated with the sound. While an example manner of implementing the sound source location analyzer 138 of FIG. 1 is illustrated in FIG. 3, one or more of the elements, processes and/or devices illustrated in FIG. 3 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example audio sensor interface 302, the example audio signal analyzer 304, the example cross-correlation analyzer 306, the example location analyzer 308, the example model data store 310, the example orientation analyzer 312, the example response generator 314 and/or, more generally, the example sound source location analyzer 138 of FIG. 3 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
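Returning to the threshold-angle policy described above, a minimal sketch of the decision follows; the angle convention, default values, and names are illustrative assumptions.

def choose_response(angle_deg, user_deg=90.0, threshold_deg=30.0):
    """Treat sound within threshold_deg of the expected user direction as
    user speech to isolate; otherwise treat it as noise to suppress."""
    diff = abs((angle_deg - user_deg + 180.0) % 360.0 - 180.0)  # wrap 0-180
    return "isolate_for_speech" if diff <= threshold_deg else "suppress_noise"

assert choose_response(100.0) == "isolate_for_speech"   # near the user
assert choose_response(270.0) == "suppress_noise"       # behind the device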
Thus, for example, any of the example audio sensor interface 302, the example audio signal analyzer 304, the example cross-correlation analyzer 306, the example location analyzer 308, the example model data store 310, the example orientation analyzer 312, the example response generator 314 and/or, more generally, the example sound source location analyzer 138 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example audio sensor interface 302, the example audio signal analyzer 304, the example cross-correlation analyzer 306, the example location analyzer 308, the example model data store 310, the example orientation analyzer 312, and/or the example response generator 314 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example sound source location analyzer 138 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 3, and/or may include more than one of any or all of the illustrated elements, processes and devices. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. A flowchart representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the sound source location analyzer 138 of FIGS. 1 and/or 3 is shown in FIG. 4. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowchart illustrated in FIG. 4 to cause a machine to implement the operations outlined in the flowchart, many other methods of implementing the example sound source location analyzer 138 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry may be distributed in different network locations and/or local to one or more devices (e.g., a multi-core processor in a single machine, multiple processors distributed across a server rack, etc.). The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts when decrypted, decompressed, and combined form a set of executable instructions that implement one or more functions that may together form a program such as that described herein. In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit. The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc. 
As mentioned above, the example processes of FIG. 4 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. "Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein, singular references (e.g., "a", "an", "first", "second", etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more", and "at least one" can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. 
Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous. The program of FIG. 4 begins at block 402 where the example audio sensor interface 302 receives a first audio signal of an external sound from a first microphone (e.g., the microphone 106 of FIG. 1). At block 404, the example audio sensor interface 302 receives a second audio signal of the external sound from a second microphone (e.g., the microphone 108 of FIG. 1). At block 406, the example audio signal analyzer 304 identifies a first signal frame (e.g., the first signal frame 206 of FIG. 2) from the first audio signal. At block 408, the example audio signal analyzer 304 identifies a second signal frame (e.g., the second signal frame 208 of FIG. 2) from the second audio signal. At block 410, the example audio signal analyzer 304 normalizes the first and second signal frames 206, 208. At block 412, the example cross-correlation analyzer 306 generates a cross-correlation vector (e.g., the cross-correlation vector 210 of FIG. 2) corresponding to the first and second signal frames 206, 208. At block 414, the example cross-correlation analyzer 306 identifies a segment (e.g., the segment 214 of FIG. 2) of the cross-correlation vector 210. At block 416, the example orientation analyzer 312 determines a current orientation of the computing device 100. As used herein, determining an orientation of the computing device 100 includes determining an orientation of a portion of the computing device 100 including the microphones 106, 108 (e.g., the orientation of the lid 104 of a laptop). At block 418, the example location analyzer 308 determines whether there is a machine learning model available for the current orientation. If so, control advances to block 420 where the example location analyzer 308 selects the machine learning model to analyze the vector segment 214. In some examples, the orientation of the computing device 100 may be assumed and/or there may only be one machine learning model. In some such examples, blocks 416-420 may be omitted. At block 422, the example location analyzer 308 determines the direction of the source of the external sound based on the vector segment using the selected machine learning model. At block 424, the example response generator 314 implements a response based on the direction of the source. Thereafter, control advances to block 426. Returning to block 418, if the example location analyzer 308 determines that there is no machine learning model available for the current orientation, control advances directly to block 426. At block 426, the example sound source location analyzer 138 determines whether to continue. If so, control returns to block 402 to repeat the process. Otherwise, the example program of FIG. 4 ends. Experimental testing has confirmed that systems implementing examples disclosed herein are able to identify the location or direction of a source of sound across a 360 degree space surrounding a computing device using the audio feedback signals of only two microphones with a relatively high degree of accuracy (e.g., greater than 95% accuracy). In particular, a computing device with a form factor similar to the laptop shown in FIG. 1 was tested by using the two microphones 106, 108 to capture voice recordings made at 30 degree intervals surrounding the computing device (e.g., at the locations of the angles 114-136 shown in FIG. 1) at a distance of approximately 3 meters. 
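The feature-extraction portion of this flow (blocks 406-414) can be sketched roughly as follows. Here the cross-correlation is computed via a generalized cross-correlation with phase transform (GCC-PHAT), which is one option consistent with the disclosure (see example 6 below); the 8192-sample frame length matches the experimental description that follows, while the segment length, peak-normalization step, and all names are illustrative assumptions.

```python
import numpy as np

# A minimal sketch, assuming GCC-PHAT as the cross-correlation and peak
# normalization of the frames. FRAME_LEN matches the 8192-sample frames
# described below; SEGMENT_LEN is an illustrative assumption.

FRAME_LEN = 8192    # samples per frame (~17 ms at 48 kHz)
SEGMENT_LEN = 128   # cross-correlation elements kept as features

def gcc_phat(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Generalized cross-correlation with phase transform of two frames."""
    n = 2 * FRAME_LEN
    spec = np.fft.rfft(frame_a, n) * np.conj(np.fft.rfft(frame_b, n))
    spec /= np.abs(spec) + 1e-12        # phase transform (spectral whitening)
    cc = np.fft.irfft(spec, n)
    return np.fft.fftshift(cc)          # move the zero-lag point to the center

def feature_segment(sig_a: np.ndarray, sig_b: np.ndarray) -> np.ndarray:
    """Blocks 406-414: frame, normalize, cross-correlate, take mid-section."""
    frame_a = sig_a[:FRAME_LEN] / (np.max(np.abs(sig_a[:FRAME_LEN])) + 1e-12)
    frame_b = sig_b[:FRAME_LEN] / (np.max(np.abs(sig_b[:FRAME_LEN])) + 1e-12)
    cc = gcc_phat(frame_a, frame_b)
    mid = len(cc) // 2                  # mid-section surrounds the peak region
    return cc[mid - SEGMENT_LEN // 2 : mid + SEGMENT_LEN // 2]
```

Because the microphones are closely spaced, physically possible time differences of arrival correspond to lags near the center of the shifted cross-correlation vector, which is why a truncated mid-section can serve as a compact feature vector.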
The recordings at each angle 114-136 lasted for approximately 4 minutes and were captured by the microphones 106, 108 at a sampling frequency of approximately 48 kHz. Successive, individual signal frames of 8192 samples (corresponding to approximately 17 ms) from each audio signal were used to generate corresponding cross-correlation vectors (e.g., the cross-correlation vector 210 of FIG. 2). Truncated mid-sections or segments 214 of the cross-correlation vectors 210 were then used as feature inputs for a single layer 64-neuron shallow neural network classifier. The classifier was configured to have 13 classes including 12 classes for the angles from 0° to 330° in 30° intervals (as shown in FIG. 1) and one no noise class (e.g., a junk class). The vector segments 214 of the cross-correlation of the different signal frames 206, 208 were labelled with the position (e.g., angle 114-136) at which the recordings were made to serve as ground truth training data to train the neural network. More particularly, this training data was divided between 272,000 training blocks, 58,300 validation blocks, and 58,300 testing blocks. FIG. 5 is a heatmap 500 that represents the feature extraction of all samples (e.g., all segmented cross-correlation vectors) captured at each angle 114-136. More particularly, the samples corresponding to each angle 114-136 are grouped in blocks in the heatmap with the samples associated with the first angle 114 (e.g., corresponding to the 0° position) at the top of the heatmap 500 and the last angle 136 (e.g., corresponding to the 330° position) at the bottom of the heatmap 500. The X-axis (corresponding to the width of the heatmap 500) represents the length or number of vector elements included in the segment 214 of the cross-correlation vector 210 used in the analysis. The color or shading on the heatmap 500 is representative of the value of the elements in each vector segment, with the lightest colored regions corresponding to the peak value 212 in each vector segment 214. Thus, as shown in the heatmap 500 of FIG. 5, the position of the peak value 212 changes as the recordings were captured from the different angles 114-136, thereby making it possible to identify the particular angle 114-136 associated with any given sound recording. However, the positions of the peak values 212 along the X-axis in FIG. 5 associated with the fifth angle 122 (e.g., the 120° position) are approximately the same as the positions of the peak values 212 associated with the eleventh angle 134 (e.g., the 300° position) even though the two angles are 180 degrees apart. As such, the position of the peak value 212 is insufficient by itself to determine the location of a source of sound across a full 360 degree area. However, by providing the values in the vector segment 214 surrounding the peak value 212 as inputs to the machine learning model, as disclosed herein, it is possible to reliably identify sound originating from any one of the angles 114-136. This is demonstrated by the confusion matrix 600 of FIG. 6, which indicates an accuracy of greater than 96% across all angles around a full 360 degrees of rotation. FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIG. 4 to implement the sound source location analyzer 138 of FIGS. 1 and/or 3. 
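Returning to the classifier configuration described above (a single hidden layer of 64 neurons mapping cross-correlation segments to 13 classes: twelve 30° directions plus one junk class), a rough sketch of such a shallow network follows. The use of scikit-learn, the placeholder training data, and the optimizer settings are assumptions for illustration only; the disclosure does not name a particular framework.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# A minimal sketch of the described classifier topology: one hidden layer of
# 64 neurons, 13 output classes. The framework choice and placeholder data
# are illustrative assumptions, not details from the disclosure.

ANGLE_CLASSES = list(range(0, 360, 30)) + ["junk"]  # 12 directions + junk

# X: (num_samples, segment_length) cross-correlation segments as features
# y: integer labels in [0, 12] derived from the recording angle (ground truth)
X = np.random.randn(1000, 128)           # placeholder feature matrix
y = np.random.randint(0, 13, size=1000)  # placeholder labels

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
model.fit(X, y)

# A predicted class index maps back to a direction (or the junk class).
predicted = int(model.predict(X[:1])[0])
print(ANGLE_CLASSES[predicted])
```

Feeding the network the whole segment, rather than only the peak position, is what lets it separate directions such as 120° and 300° whose peaks fall at nearly the same lag, as discussed above.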
The processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad), a personal digital assistant (PDA), an Internet appliance, or any other type of computing device. The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor implements the example audio signal analyzer 304, the example cross-correlation analyzer 306, the example location analyzer 308, the example orientation analyzer 312, and the example response generator 314. The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®) and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller. The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface. In this example, the interface circuit 720 implements the example audio sensor interface 302. In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor. The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. 
The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc. The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives. In this example, the mass storage device 728 includes the example model data store 310. The machine executable instructions 732 of FIG. 4 may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD. From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that enable the 360 degree detection of a source of sound using the audio signals of no more than two microphones. As a result, sound source location detection may be implemented in a more cost effective manner because examples disclosed herein do not require larger arrays of microphones as is common in other sound source detection systems. Furthermore, examples disclosed herein achieve relatively high accuracy (e.g., greater than 95%) using a shallow neural network that does not require the same computational capacity as existing approaches that rely on computationally intensive cross-correlations based on FFT calculations and/or deep learning algorithms. Therefore, examples disclosed herein improve the efficiency of using a computing device by reducing the demand on computational overhead. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer. Example methods, apparatus, systems, and articles of manufacture to detect the location of sound sources external to computing devices are disclosed herein. Further examples and combinations thereof include the following: Example 1 includes an apparatus to determine a direction of a source of a sound relative to a computing device, the apparatus comprising a cross-correlation analyzer to generate a vector of values corresponding to a cross-correlation of first and second audio signals corresponding to the sound, the first audio signal received from a first microphone of the computing device, the second audio signal received from a second microphone of the computing device, and a location analyzer to use a machine learning model and a set of the values of the vector to determine the direction of the source of the sound. Example 2 includes the apparatus of example 1, wherein the location analyzer is to use the machine learning model to determine the direction of the source of the sound across 360 degrees of space surrounding the computing device without feedback from additional microphones other than the first and second microphones. Example 3 includes the apparatus of any one of examples 1 and 2, wherein the machine learning model is to distinguish between a first source of sound located in front of the computing device and a second source of sound located behind the computing device. 
Example 4 includes the apparatus of any one of examples 1-3, wherein the first and second microphones are to be spaced apart on a surface of the computing device, the first and second microphones facing a same direction as the surface. Example 5 includes the apparatus of example 4, further including using the machine learning model to determine the direction of the source of the sound from among a plurality of possible directions, the plurality of possible directions including first directions distributed across a front 180 degree region and second directions distributed across a back 180 degree region, the front 180 degree region corresponding to an area in front of the surface, the back 180 degree region corresponding to an area behind the surface. Example 6 includes the apparatus of any one of examples 1-5, wherein the vector corresponds to a generalized cross correlation with phase transform of the first and second audio signals. Example 7 includes the apparatus of any one of examples 1-6, wherein the set of the values corresponds to less than all values in the vector. Example 8 includes the apparatus of any one of examples 1-7, wherein the set of the values corresponds to a segment of the vector including more than 1% of all of the values. Example 9 includes the apparatus of any one of examples 1-8, wherein the set of the values corresponds to a segment of the vector including at least a threshold number of the values, the threshold number being at least twice a number of samples corresponding to a time difference of arrival for a sound originating at a point collinear with the first and second microphones. Example 10 includes the apparatus of any one of examples 1-9, wherein the set of values corresponds to a mid-section of the vector, the mid-section excluding ones of the values on either side of the mid-section. Example 11 includes the apparatus of any one of examples 1-10, wherein the set of the values corresponds to a segment of the vector that surrounds a peak value in the vector. Example 12 includes the apparatus of any one of examples 1-11, further including a response generator to generate a response based on the direction of the source of the sound. Example 13 includes the apparatus of example 12, wherein the response generator is to at least one of isolate the sound when the direction of the source of the sound is within a threshold angle of a first direction, or reduce noise associated with the sound when the direction of the source of the sound is outside of the threshold angle of the first direction. Example 14 includes the apparatus of example 12, wherein the response generator is to in response to the direction of the source corresponding to a front of the computing device, identify the sound as originating from a user of the computing device, and in response to the direction of the source corresponding to a rear of the computing device, disregard the sound. Example 15 includes the apparatus of any one of examples 1-14, wherein the machine learning model is implemented by a shallow neural network. 
Example 16 includes a non-transitory computer readable medium comprising instructions that, when executed, cause a machine to at least generate a vector of values corresponding to a cross-correlation of first and second audio signals corresponding to a sound, the first audio signal received from a first microphone of a computing device, the second audio signal received from a second microphone of the computing device, and use a machine learning model and a set of the values of the vector to determine a direction of a source of the sound. Example 17 includes the computer readable medium of example 16, wherein the instructions further cause the machine to use the machine learning model to determine the direction of the source of the sound across 360 degrees of space surrounding the computing device without feedback from additional microphones other than the first and second microphones. Example 18 includes the computer readable medium of any one of examples 16 and 17, wherein the machine learning model is to distinguish between a first source of sound located in front of the computing device and a second source of sound located behind the computing device. Example 19 includes the computer readable medium of any one of examples 16-18, wherein the first and second microphones are to be spaced apart on a surface of the computing device, the first and second microphones facing a same direction as the surface. Example 20 includes the computer readable medium of example 19, wherein the instructions further cause the machine to use the machine learning model to determine the direction of the source of the sound from among a plurality of possible directions, the plurality of possible directions including first directions distributed across a front 180 degree region and second directions distributed across a back 180 degree region, the front 180 degree region corresponding to an area in front of the surface, the back 180 degree region corresponding to an area behind the surface. Example 21 includes the computer readable medium of any one of examples 16-20, wherein the vector corresponds to a generalized cross correlation with phase transform of the first and second audio signals. Example 22 includes the computer readable medium of any one of examples 16-21, wherein the set of the values corresponds to less than all values in the vector. Example 23 includes the computer readable medium of any one of examples 16-22, wherein the set of the values corresponds to a segment of the vector including more than 1% of all of the values. Example 24 includes the computer readable medium of any one of examples 16-23, wherein the set of the values corresponds to a segment of the vector including at least a threshold number of the values, the threshold number being at least twice a number of samples corresponding to a time difference of arrival for a sound originating at a point collinear with the first and second microphones. Example 25 includes the computer readable medium of any one of examples 16-24, wherein the set of values corresponds to a mid-section of the vector, the mid-section excluding ones of the values on either side of the mid-section. Example 26 includes the computer readable medium of any one of examples 16-25, wherein the set of the values corresponds to a segment of the vector that surrounds a peak value in the vector. 
Example 27 includes the computer readable medium of any one of examples 16-26, wherein the instructions further cause the machine to generate a response based on the direction of the source of the sound. Example 28 includes the computer readable medium of example 27, wherein the instructions further cause the machine to at least one of isolate the sound when the direction of the source of the sound is within a threshold angle of a first direction, or reduce noise associated with the sound when the direction of the source of the sound is outside of the threshold angle of the first direction. Example 29 includes the computer readable medium of example 27, wherein the instructions further cause the machine to in response to the direction of the source corresponding to a front of the computing device, identify the sound as originating from a user of the computing device, and in response to the direction of the source corresponding to a rear of the computing device, disregard the sound. Example 30 includes the computer readable medium of any one of examples 16-29, wherein the machine learning model is implemented by a shallow neural network. Example 31 includes a method to determine a direction of a source of a sound relative to a computing device, the method comprising generating a vector of values corresponding to a cross-correlation of first and second audio signals corresponding to the sound, the first audio signal received from a first microphone of the computing device, the second audio signal received from a second microphone of the computing device, and using a machine learning model and a set of the values of the vector to determine the direction of the source of the sound. Example 32 includes the method of example 31, further including using the machine learning model to determine the direction of the source of the sound across 360 degrees of space surrounding the computing device without feedback from additional microphones other than the first and second microphones. Example 33 includes the method of any one of examples 31 and 32, wherein the machine learning model is to distinguish between a first source of sound located in front of the computing device and a second source of sound located behind the computing device. Example 34 includes the method of any one of examples 31-33, wherein the first and second microphones are to be spaced apart on a surface of the computing device, the first and second microphones facing a same direction as the surface. Example 35 includes the method of example 34, further including using the machine learning model to determine the direction of the source of the sound from among a plurality of possible directions, the plurality of possible directions including first directions distributed across a front 180 degree region and second directions distributed across a back 180 degree region, the front 180 degree region corresponding to an area in front of the surface, the back 180 degree region corresponding to an area behind the surface. Example 36 includes the method of any one of examples 31-35, wherein the vector corresponds to a generalized cross correlation with phase transform of the first and second audio signals. Example 37 includes the method of any one of examples 31-36, wherein the set of the values corresponds to less than all values in the vector. Example 38 includes the method of any one of examples 31-37, wherein the set of the values corresponds to a segment of the vector including more than 1% of all of the values. 
Example 39 includes the method of any one of examples 31-38, wherein the set of the values corresponds to a segment of the vector including at least a threshold number of the values, the threshold number being at least twice a number of samples corresponding to a time difference of arrival for a sound originating at a point collinear with the first and second microphones. Example 40 includes the method of any one of examples 31-39, wherein the set of values corresponds to a mid-section of the vector, the mid-section excluding ones of the values on either side of the mid-section. Example 41 includes the method of any one of examples 31-40, wherein the set of the values corresponds to a segment of the vector that surrounds a peak value in the vector. Example 42 includes the method of any one of examples 31-41, further including generating a response based on the direction of the source of the sound. Example 43 includes the method of example 42, wherein the response includes at least one of isolating the sound when the direction of the source of the sound is within a threshold angle of a first direction, or reducing noise associated with the sound when the direction of the source of the sound is outside of the threshold angle of the first direction. Example 44 includes the method of example 42, wherein the response includes, in response to the direction of the source corresponding to a front of the computing device, identifying the sound as originating from a user of the computing device, and in response to the direction of the source corresponding to a rear of the computing device, disregarding the sound. Example 45 includes the method of any one of examples 31-44, wherein the machine learning model is implemented by a shallow neural network. Example 46 includes an apparatus to determine a direction of a source of a sound relative to a computing device, the apparatus comprising means for generating a vector of values corresponding to a cross-correlation of first and second audio signals corresponding to the sound, the first audio signal received from a first microphone of the computing device, the second audio signal received from a second microphone of the computing device, and means for using a machine learning model and a set of the values of the vector to determine the direction of the source of the sound. Example 47 includes the apparatus of example 46, wherein the means for using the machine learning model is to determine the direction of the source of the sound across 360 degrees of space surrounding the computing device without feedback from additional microphones other than the first and second microphones. Example 48 includes the apparatus of any one of examples 46 and 47, wherein the machine learning model is to distinguish between a first source of sound located in front of the computing device and a second source of sound located behind the computing device. Example 49 includes the apparatus of any one of examples 46-48, wherein the first and second microphones are to be spaced apart on a surface of the computing device, the first and second microphones facing a same direction as the surface. 
Example 50 includes the apparatus of example 49, wherein the means for using the machine learning model is to determine the direction of the source of the sound from among a plurality of possible directions, the plurality of possible directions including first directions distributed across a front 180 degree region and second directions distributed across a back 180 degree region, the front 180 degree region corresponding to an area in front of the surface, the back 180 degree region corresponding to an area behind the surface. Example 51 includes the apparatus of any one of examples 46-50, wherein the vector corresponds to a generalized cross correlation with phase transform of the first and second audio signals. Example 52 includes the apparatus of any one of examples 46-51, wherein the set of the values corresponds to less than all values in the vector. Example 53 includes the apparatus of any one of examples 46-52, wherein the set of the values corresponds to a segment of the vector including more than 1% of all of the values. Example 54 includes the apparatus of any one of examples 46-53, wherein the set of the values corresponds to a segment of the vector including at least a threshold number of the values, the threshold number being at least twice a number of samples corresponding to a time difference of arrival for a sound originating at a point collinear with the first and second microphones. Example 55 includes the apparatus of any one of examples 46-54, wherein the set of values corresponds to a mid-section of the vector, the mid-section excluding ones of the values on either side of the mid-section. Example 56 includes the apparatus of any one of examples 46-55, wherein the set of the values corresponds to a segment of the vector that surrounds a peak value in the vector. Example 57 includes the apparatus of any one of examples 46-56, further including means for generating a response based on the direction of the source of the sound. Example 58 includes the apparatus of example 57, wherein the response generating means is to at least one of isolate the sound when the direction of the source of the sound is within a threshold angle of a first direction, or reduce noise associated with the sound when the direction of the source of the sound is outside of the threshold angle of the first direction. Example 59 includes the apparatus of example 57, wherein the response generating means is to in response to the direction of the source corresponding to a front of the computing device, identify the sound as originating from a user of the computing device, and in response to the direction of the source corresponding to a rear of the computing device, disregard the sound. Example 60 includes the apparatus of any one of examples 46-58, wherein the machine learning model is implemented by a shallow neural network. Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. The following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.
DETAILED DESCRIPTION One or more embodiments are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It is evident, however, that the various embodiments can be practiced without these specific details, and without applying to any particular networked environment or standard. One or more aspects of the technology described herein are generally directed towards user equipment geolocation, i.e., identifying physical locations at which user equipment is or was located. Network measurement data associated with user equipment can be separated into static periods in which the user equipment was not moving, and moving periods in which the user equipment was moving. Static location processing can be applied to determine static locations from the static period network measurements, and moving location processing can be applied to determine moving locations from the moving period network measurements. Resulting static location information and moving location information can then be merged in order to improve the accuracy of both the static and the moving location information. The enhanced accuracy location information can be stored and used for any desired application. Further aspects and embodiments of this disclosure are described in detail below. As used in this disclosure, in some embodiments, the terms “component,” “system” and the like are intended to refer to, or comprise, a computer-related entity or an entity related to an operational apparatus with one or more specific functionalities, wherein the entity can be either hardware, a combination of hardware and software, software, or software in execution. As an example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, computer-executable instructions, a program, and/or a computer. By way of illustration and not limitation, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components can communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry, which is operated by a software application or firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts, the electronic components can comprise a processor therein to execute software or firmware that confers at least in part the functionality of the electronic components. 
While various components have been illustrated as separate components, it will be appreciated that multiple components can be implemented as a single component, or a single component can be implemented as multiple components, without departing from example embodiments. The term “facilitate” as used herein is in the context of a system, device or component “facilitating” one or more actions or operations, in respect of the nature of complex computing environments in which multiple components and/or multiple devices can be involved in some computing operations. Non-limiting examples of actions that may or may not involve multiple components and/or multiple devices comprise transmitting or receiving data, establishing a connection between devices, determining intermediate results toward obtaining a result, etc. In this regard, a computing device or component can facilitate an operation by playing any part in accomplishing the operation. When operations of a component are described herein, it is thus to be understood that where the operations are described as facilitated by the component, the operations can be optionally completed with the cooperation of one or more other computing devices or components, such as, but not limited to, sensors, antennae, audio and/or visual output devices, other devices, etc. Further, the various embodiments can be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable (or machine-readable) device or computer-readable (or machine-readable) storage/communications media. For example, computer readable storage media can comprise, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., card, stick, key drive). Of course, those skilled in the art will recognize many modifications can be made to this configuration without departing from the scope or spirit of the various embodiments. Moreover, terms such as “mobile device equipment,” “mobile station,” “mobile,” “subscriber station,” “access terminal,” “terminal,” “handset,” “communication device,” “mobile device” (and/or terms representing similar terminology) can refer to a wireless device utilized by a subscriber or mobile device of a wireless communication service to receive or convey data, control, voice, video, sound, gaming or substantially any data-stream or signaling-stream. The foregoing terms are utilized interchangeably herein and with reference to the related drawings. Likewise, the terms “access point (AP),” “Base Station (BS),” “BS transceiver,” “BS device,” “cell site,” “cell site device,” “gNode B (gNB),” “evolved Node B (eNode B, eNB),” “home Node B (HNB)” and the like, refer to wireless network components or appliances that transmit and/or receive data, control, voice, video, sound, gaming or substantially any data-stream or signaling-stream from one or more subscriber stations. Data and signaling streams can be packetized or frame-based flows. 
Furthermore, the terms "device," "communication device," "mobile device," "subscriber," "customer entity," "consumer," "entity" and the like are employed interchangeably throughout, unless context warrants particular distinctions among the terms. It should be appreciated that such terms can refer to human entities or automated components supported through artificial intelligence (e.g., a capacity to make inference based on complex mathematical formalisms), which can provide simulated vision, sound recognition and so forth. It should be noted that although various aspects and embodiments have been described herein in the context of 4G, 5G, or other next generation networks, the disclosed aspects are not limited to a 4G or 5G implementation, and/or other next generation network implementations, as the techniques can also be applied, for example, in third generation (3G), or other 4G systems. In this regard, aspects or features of the disclosed embodiments can be exploited in substantially any wireless communication technology. Such wireless communication technologies can include universal mobile telecommunications system (UMTS), global system for mobile communication (GSM), code division multiple access (CDMA), wideband CDMA (WCDMA), CDMA2000, time division multiple access (TDMA), frequency division multiple access (FDMA), multi-carrier CDMA (MC-CDMA), single-carrier CDMA (SC-CDMA), single-carrier FDMA (SC-FDMA), orthogonal frequency division multiplexing (OFDM), discrete Fourier transform spread OFDM (DFT-spread OFDM), single carrier FDMA (SC-FDMA), filter bank based multi-carrier (FBMC), zero tail DFT-spread-OFDM (ZT DFT-s-OFDM), generalized frequency division multiplexing (GFDM), fixed mobile convergence (FMC), universal fixed mobile convergence (UFMC), unique word OFDM (UW-OFDM), unique word DFT-spread OFDM (UW DFT-Spread-OFDM), cyclic prefix OFDM (CP-OFDM), resource-block-filtered OFDM, wireless fidelity (Wi-Fi), worldwide interoperability for microwave access (WiMAX), wireless local area network (WLAN), general packet radio service (GPRS), enhanced GPRS, third generation partnership project (3GPP), long term evolution (LTE), LTE frequency division duplex (FDD), time division duplex (TDD), 5G, third generation partnership project 2 (3GPP2), ultra mobile broadband (UMB), high speed packet access (HSPA), evolved high speed packet access (HSPA+), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Zigbee, or another institute of electrical and electronics engineers (IEEE) 802.12 technology. In this regard, all or substantially all aspects disclosed herein can be exploited in legacy telecommunication technologies. FIG. 1 illustrates a non-limiting example of a wireless communication system 100 which can be used in connection with at least some embodiments of the subject disclosure. In one or more embodiments, system 100 can comprise one or more user equipment UEs 102₁, 102₂, referred to collectively as UEs 102, a network node 104 that supports cellular communications in a service area 110, also known as a cell, and communication service provider network(s) 106. The non-limiting term "user equipment" can refer to any type of device that can communicate with a network node 104 in a cellular or mobile communication system 100. UEs 102 can have one or more antenna panels having vertical and horizontal elements. 
Examples of UEs 102 comprise target devices, device to device (D2D) UEs, machine type UEs or UEs capable of machine to machine (M2M) communications, personal digital assistants (PDAs), tablets, mobile terminals, smart phones, laptop mounted equipment (LME), universal serial bus (USB) dongles enabled for mobile communications, computers having mobile capabilities, mobile devices such as cellular phones, laptops having laptop embedded equipment (LEE, such as a mobile broadband adapter), tablet computers having mobile broadband adapters, wearable devices, virtual reality (VR) devices, heads-up display (HUD) devices, smart cars, machine-type communication (MTC) devices, augmented reality head mounted displays, and the like. UEs 102 can also comprise IoT devices that communicate wirelessly. In various embodiments, system 100 comprises communication service provider network(s) 106 serviced by one or more wireless communication network providers. Communication service provider network(s) 106 can comprise a "core network". In example embodiments, UEs 102 can be communicatively coupled to the communication service provider network(s) 106 via network node 104. The network node 104 (e.g., network node device) can communicate with UEs 102, thus providing connectivity between the UEs 102 and the wider cellular network. The UEs 102 can send transmission type recommendation data to the network node 104. The transmission type recommendation data can comprise a recommendation to transmit data via a closed loop multiple input multiple output (MIMO) mode and/or a rank-1 precoder mode. A network node 104 can have a cabinet and other protected enclosures, computing devices, an antenna mast, and multiple antennas for performing various transmission operations (e.g., MIMO operations) and for directing/steering signal beams. Network node 104 can comprise one or more base station devices which implement features of the network node 104. Network nodes can serve several cells, depending on the configuration and type of antenna. In example embodiments, UEs 102 can send and/or receive communication data via a wireless link to the network node 104. The dashed arrow lines from the network node 104 to the UEs 102 represent downlink (DL) communications to the UEs 102. The solid arrow lines from the UEs 102 to the network node 104 represent uplink (UL) communications. Communication service provider networks 106 can facilitate providing wireless communication services to UEs 102 via the network node 104 and/or various additional network devices (not shown) included in the one or more communication service provider networks 106. The one or more communication service provider networks 106 can comprise various types of disparate networks, including but not limited to: cellular networks, femto networks, picocell networks, microcell networks, internet protocol (IP) networks, Wi-Fi service networks, broadband service network, enterprise networks, cloud based networks, millimeter wave networks and the like. For example, in at least one implementation, system 100 can be or comprise a large scale wireless communication network that spans various geographic areas. According to this implementation, the one or more communication service provider networks 106 can be or comprise the wireless communication network and/or various additional devices and components of the wireless communication network (e.g., additional network devices and cells, additional UEs, network server devices, etc.). 
The network node 104 can be connected to the one or more communication service provider networks 106 via one or more backhaul links 108. For example, the one or more backhaul links 108 can comprise wired link components, such as a T1/E1 phone line, a digital subscriber line (DSL) (e.g., either synchronous or asynchronous), an asymmetric DSL (ADSL), an optical fiber backbone, a coaxial cable, and the like. The one or more backhaul links 108 can also comprise wireless link components, such as but not limited to, line-of-sight (LOS) or non-LOS links which can comprise terrestrial air-interfaces or deep space links (e.g., satellite communication links for navigation). Backhaul links 108 can be implemented via a "transport network" in some embodiments. In another embodiment, network node 104 can be part of an integrated access and backhaul network. This may allow easier deployment of a dense network of self-backhauled 5G cells in a more integrated manner by building upon many of the control and data channels/procedures defined for providing access to UEs. Wireless communication system 100 can employ various cellular systems, technologies, and modulation modes to facilitate wireless radio communications between devices (e.g., the UE 102 and the network node 104). While example embodiments might be described for 5G new radio (NR) systems, the embodiments can be applicable to any radio access technology (RAT) or multi-RAT system where the UE operates using multiple carriers, e.g., LTE FDD/TDD, GSM/GERAN, CDMA2000, etc. For example, system 100 can operate in accordance with any 5G, next generation communication technology, or existing communication technologies, various examples of which are listed supra. In this regard, various features and functionalities of system 100 are applicable where the devices (e.g., the UEs 102 and the network device 104) of system 100 are configured to communicate wireless signals using one or more multi carrier modulation schemes, wherein data symbols can be transmitted simultaneously over multiple frequency subcarriers (e.g., OFDM, CP-OFDM, DFT-spread OFDM, UFMC, FBMC, etc.). The embodiments are applicable to single carrier as well as to multicarrier (MC) or carrier aggregation (CA) operation of the UE. The term carrier aggregation (CA) is also called (e.g., interchangeably called) "multi-carrier system", "multi-cell operation", "multi-carrier operation", "multi-carrier" transmission and/or reception. Note that some embodiments are also applicable for Multi RAB (radio bearers) on some carriers (that is, data plus speech is simultaneously scheduled). In various embodiments, system 100 can be configured to provide and employ 5G or subsequent generation wireless networking features and functionalities. 5G wireless communication networks are expected to fulfill the demand of exponentially increasing data traffic and to allow people and machines to enjoy gigabit data rates with virtually zero (e.g., single digit millisecond) latency. Compared to 4G, 5G supports more diverse traffic scenarios. For example, in addition to the various types of data communication between conventional UEs (e.g., phones, smartphones, tablets, PCs, televisions, internet enabled televisions, AR/VR head mounted displays (HMDs), etc.) supported by 4G networks, 5G networks can be employed to support data communication between smart cars in association with driverless car environments, as well as machine type communications (MTCs). 
Considering the drastically different communication needs of these different traffic scenarios, the ability to dynamically configure waveform parameters based on traffic scenarios while retaining the benefits of multi carrier modulation schemes (e.g., OFDM and related schemes) can provide a significant contribution to the high speed/capacity and low latency demands of 5G networks. With waveforms that split the bandwidth into several sub-bands, different types of services can be accommodated in different sub-bands with the most suitable waveform and numerology, leading to an improved spectrum utilization for 5G networks. To meet the demand for data centric applications, features of 5G networks can comprise: increased peak bit rate (e.g., 20 Gbps), larger data volume per unit area (e.g., high system spectral efficiency, for example about 3.5 times that of spectral efficiency of long term evolution (LTE) systems), high capacity that allows more device connectivity both concurrently and instantaneously, lower battery/power consumption (which reduces energy and consumption costs), better connectivity regardless of the geographic region in which a user is located, a larger number of devices, lower infrastructural development costs, and higher reliability of the communications. Thus, 5G networks can allow for: data rates of several tens of megabits per second supported for tens of thousands of users; 1 gigabit per second offered simultaneously to tens of workers on the same office floor, for example; several hundreds of thousands of simultaneous connections supported for massive sensor deployments; improved coverage; enhanced signaling efficiency; and reduced latency compared to LTE. The 5G access network can utilize higher frequencies (e.g., >6 GHz) to aid in increasing capacity. Currently, much of the millimeter wave (mmWave) spectrum, the band of spectrum between 30 GHz and 300 GHz, is underutilized. The millimeter waves have shorter wavelengths that range from 10 millimeters to 1 millimeter, and these mmWave signals experience severe path loss, penetration loss, and fading. However, the shorter wavelength at mmWave frequencies also allows more antennas to be packed in the same physical dimension, which allows for large-scale spatial multiplexing and highly directional beamforming. Performance can be improved if both the transmitter and the receiver are equipped with multiple antennas. Multi-antenna techniques can significantly increase the data rates and reliability of a wireless communication system. The use of multiple input multiple output (MIMO) techniques, which was introduced in the 3GPP and has been in use (including with LTE), is a multi-antenna technique that can improve the spectral efficiency of transmissions, thereby significantly boosting the overall data carrying capacity of wireless systems. The use of MIMO techniques can improve mmWave communications and has been widely recognized as a potentially important component for access networks operating in higher frequencies. MIMO can be used for achieving diversity gain, spatial multiplexing gain and beamforming gain. For these reasons, MIMO systems are an important part of the 3rd and 4th generation wireless systems and are in use in 5G systems. 
FIG. 2 illustrates example static and moving locations of user equipment, and example network measurements reported to network nodes of a wireless communication system, in accordance with various aspects and embodiments of the subject disclosure. FIG. 2 includes example network nodes 211 and 212 and various example locations within a geographical region surrounding the network nodes 211 and 212. The locations represent estimated locations visited by user equipment at different times T1, T2, T3, T4, T5, T6, T7, T8 (the user equipment itself is not illustrated in FIG. 2 for simplicity of illustration). The estimated locations include "static" locations, at which the user equipment remained for a "long" period of time, e.g., for 15 minutes or longer, and "moving" locations, visited by the user equipment as the user equipment was in motion, e.g., along the illustrated route 240. The estimated locations comprise T1 static location 201, T2 moving location 202, T3 moving location 203, T4 moving location 204, T4 adjusted moving location 205, T5 moving location 206, T6 moving location 207, T7 moving location 208, T8 static location 209, and T8 adjusted static location 210.

In FIG. 2, the estimated T4 moving location 204 of the user equipment at time T4 can be adjusted according to techniques disclosed herein, thereby identifying T4 adjusted moving location 205. Similarly, the estimated T8 static location 209 of the user equipment at time T8 can be adjusted according to techniques disclosed herein, thereby identifying T8 adjusted static location 210. The T4 adjusted moving location 205 and the T8 adjusted static location 210 can have higher accuracy than the initially estimated locations 204, 209. The adjusted locations 205, 210 can be stored along with other estimated location information 201, 202, 203, 206, 207, and 208, to achieve improved location information associated with the user equipment. The resulting higher accuracy location information can be used for any desired application, e.g., for network planning or any other application.

In general, the techniques disclosed herein can include obtaining a time series of network measurement data associated with the user equipment. The time series of network measurement data can include network measurement data collected at multiple different times, e.g., collected at each of the illustrated times T1, T2, T3, T4, T5, T6, T7, T8. FIG. 2 illustrates collection of example T1 network measurements 221 via network node 211, collection of example T4 network measurements 222 via network node 212, and collection of example T8 network measurements 223 via network node 211. The network nodes 211 and 212, or any other network nodes serving the user equipment, can similarly collect network measurements at the other times T2, T3, T5, T6, and T7. Network measurement data such as T1 network measurements 221, T4 network measurements 222, and T8 network measurements 223 can be provided to network equipment, e.g., network equipment included in the communication service provider network(s) 106 illustrated in FIG. 1, and the network equipment can process the network measurement data according to the techniques disclosed herein. Example network equipment and operations thereof are described further in connection with FIGS. 3-18.
As will be described in further detail with reference to FIGS. 3-18, processing of network measurement data according to this disclosure can generally include sorting the network measurement data into data associated with static locations, such as T1 static location 201 and T8 static location 209, and data associated with moving locations, such as moving locations 202-208. Separate processing techniques can then be applied to the static location data and the moving location data, followed by merge operations wherein static location information is used to improve moving location information, and vice versa. Furthermore, processing techniques applied to moving location data associated with moving locations 202-208 can include, inter alia, separating the moving location data into data associated with different segments 231 and 232. The different segments 231 and 232 can be associated with different user equipment travel speeds. The moving location data associated with each segment 231 and 232 can be processed independently, followed by merge operations wherein location information associated with the segments 231 and 232 can be joined with location information from other segments as well as static location information.

FIG. 3 illustrates example network equipment configured to perform mobility mode identification of static and moving user equipment mobility modes, in accordance with various aspects and embodiments of the subject disclosure. FIG. 3 includes example network equipment 300, wherein the example network equipment 300 can be included in communication service provider network(s) 106 illustrated in FIG. 1, and wherein the example network equipment 300 can furthermore obtain network measurement data such as described with reference to FIG. 2. The network equipment 300 includes network measurement data store 350, general geotagging output 301, UE active session split 302, eliminate duplicate patterns 303, mobility mode identification 304, static smoothing 305, update high frequency static pattern 306, moving smoothing 307, update high frequency route pattern 308, merge landmarks 309, and UE routing output 310.

In general, with reference to FIG. 3, the network equipment 300 can be configured to use UE active session split 302, eliminate duplicate patterns 303, and mobility mode identification 304 to identify network measurement data associated with static UE locations, and to identify network measurement data associated with moving UE locations. The network equipment 300 can then use static smoothing 305 and update high frequency static pattern 306 to process network measurement data associated with the static UE locations, and the network equipment 300 can use moving smoothing 307 and update high frequency route pattern 308 to process network measurement data associated with the moving UE locations. The network equipment 300 can then use merge landmarks 309 to further improve UE location information, and the network equipment 300 can produce the UE routing output 310 comprising adjusted/improved UE location information.

In an aspect, FIG. 3 provides a framework to improve UE routing with the measurements reported by cells and eNodeBs in a telecommunications network. Methods can utilize both the network measurement patterns and time-sequences of UE locations estimated by general online geotagging processes. Methods can first determine the mobility status of a UE across various timestamps, i.e., whether the UE is static or moving. Methods can then split a UE route into multiple mobility periods, within each of which the UE mobility status is substantially unchanged.
For each mobility period, methods can apply suitable static/moving processing to further improve the location estimates. Methods can employ this divide-and-conquer approach in part because measurement noise characteristics tend to be quite different across static and moving modes. Finally, methods can combine the estimates from the various mobility periods together for a complete route. Methods can provide the capability to identify UE mobility status and improve overall accuracy for 5G and future wireless network geolocation.

In another aspect, FIG. 3 provides a framework for UE offline routing which utilizes the time sequence of wireless network measurements and estimated UE locations via general geotagging processes. Embodiments can learn the time series of network measurement patterns and build databases to reinforce the learning to determine the mobility status of UEs effectively. Mobility mode identification 304 can split UE routes into moving periods and static periods. Network equipment 300 can then apply different (static/moving) smoothing functions 305, 307 to corresponding mobility periods. Embodiments can furthermore design the reference points to exchange between different mobility periods, which helps form seamless routes when concatenating the location estimates from such periods. Network equipment 300 can enable improved UE routing in a telecommunication network. Embodiments can smooth individual UE routes by using a combination of primary location estimates based on general geolocation technologies, and secondary location estimates based on learned time series measurement patterns, to remove outliers. In some embodiments, a UE route can be represented using locations where there is a change in the UE network measurement pattern. This reduces the computational effort by eliminating redundant location estimation calculations. Embodiments can furthermore identify the UE mobility status (whether static or moving), based on the UE network measurement patterns across various time periods. Network equipment 300 can utilize databases to store frequently observed network measurement patterns corresponding to static modes and routes for specific UEs. Network equipment 300 can apply appropriate smoothing algorithms adapted to the UE mobility status. For example, network equipment 300 can reuse the patterns stored in the database when the UE is inferred to be static. Network equipment 300 can combine the estimates from the various time periods (where the UE could be in multiple mobility modes) to recover the complete UE route.

FIG. 4 is a flow diagram representing example operations of network equipment to perform mobility mode identification of static and moving user equipment mobility modes, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 300 such as illustrated in FIG. 3.

At acquire data 402, network equipment 300 can acquire time-sequences of historical geotagged call trace data. For example, network equipment 300 can obtain general geotagging output 301 from network measurement data store 350. The general geotagging output 301 can include, but need not be limited to, historical geotagged call trace data comprising at least one of international mobile subscriber identity (IMSI) information, timestamp information, timing advance information, signal strength information, serving cell information, estimated latitude information, estimated longitude information, or geotagging type information.
The UE active session split 302 can be configured to identify different periods of UE activity within general geotagging output 301, and eliminate duplicate patterns 303 can be configured to eliminate duplicates in order to decrease the volume of data to be processed.

At mobility mode identification 404, network equipment 300 can be configured to apply UE mobility mode identification 304 to the UE network measurements to mark each record as static or moving. UE mobility mode identification 304 can be configured to use this indicator to split each UE's measurement time series into static periods and moving periods.

At stabilize static location estimates 406, network equipment 300 can be configured to apply static smoothing 305 to stabilize the estimated locations in each static period. At stabilize moving location estimates 408, network equipment 300 can be configured to apply moving smoothing 307 to remove outliers and estimate a robust route for each of the moving periods.

At apply static and moving labels 410, network equipment 300 can be configured to use update high frequency static pattern 306 and update high frequency route pattern 308, respectively, to apply labels such as "static pattern"/"moving pattern" labels to network measurement/estimated location pairs. Network equipment 300 can be configured to feed this data back to UE mobility mode identification 304 and static/moving smoothing 305, 307 to refine the patterns over time, enabling process speed up. At combine estimated locations 412, network equipment 300 can be configured to combine the estimated locations from the various periods to generate a complete time-series of location estimates for each UE.

FIG. 5 is a flow diagram representing example operations of network equipment to prepare a time series of historical geotagged call trace data, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 300 such as illustrated in FIG. 3.

At obtain geotagging time series 502, network equipment 300 can be configured to obtain general geotagging output 301 comprising a time series of call trace records. The output 301 can contain the network measurements and estimated location information such as Ut = (IMSI, timestamp, network measurements, EST_LAT, EST_LON, alg_info), where alg_info indicates the geotagging technology used and the associated accuracy, EST_LAT is an estimated latitude, and EST_LON is an estimated longitude. Further network measurements can include, e.g., global connectivity index (GCI), timing advance (TA), reference signal received power (RSRP), reference signal received quality (RSRQ), or other measurement information.

At split time series into active sessions 504, for a given UE, network equipment 300 can apply UE active session split 302 to split the time series into active sessions based on the record timestamps. This can include sorting the records in a given time series by timestamp; including records within a specified time interval, δt (for example, δt = 10 minutes), of the previous timestamp into a same active session; and otherwise generating a new active session starting with a given record.

At remove redundant records 506, within a given active session, network equipment 300 can be configured to run eliminate duplicate patterns 303 to remove redundant UE records. This can include, e.g., ignoring the timestamp and comparing each record's information with that of the previous record; if the record information is duplicated, the current record can be removed.
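For illustration, operations 504 and 506 can be sketched in Python roughly as follows. The record layout (dictionaries keyed by 'timestamp' plus measurement fields) and the function names are assumptions made for this sketch, not taken from the disclosure:

    from datetime import timedelta

    # Sketch of UE active session split 302 (operation 504) and
    # eliminate duplicate patterns 303 (operation 506).
    SESSION_GAP = timedelta(minutes=10)  # the example delta-t of 10 minutes

    def split_active_sessions(records):
        """Sort records by timestamp and group them into active sessions,
        starting a new session whenever the inter-record gap exceeds
        SESSION_GAP."""
        records = sorted(records, key=lambda r: r['timestamp'])
        sessions = []
        for rec in records:
            if sessions and rec['timestamp'] - sessions[-1][-1]['timestamp'] <= SESSION_GAP:
                sessions[-1].append(rec)
            else:
                sessions.append([rec])
        return sessions

    def eliminate_duplicates(session):
        """Drop a record when, ignoring the timestamp, it repeats the
        previous kept record's information."""
        kept = []
        for rec in session:
            info = {k: v for k, v in rec.items() if k != 'timestamp'}
            prev = ({k: v for k, v in kept[-1].items() if k != 'timestamp'}
                    if kept else None)
            if info != prev:
                kept.append(rec)
        return kept

Comparing records with the timestamp removed mirrors the "ignore the timestamp" rule of operation 506, so only records carrying new information survive.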
This method keeps new information in a time series, thus reducing the computational burden. In real-world networks, network measurements can repeat many times over and the corresponding location estimates can be identical as well. The number of records can be reduced significantly, without sacrificing the accuracy of later steps.

FIG. 6 is a flow diagram representing example operations of network equipment to apply mobility mode identification, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 300 such as illustrated in FIG. 3.

At static pattern inference 602, for each record, mobility mode identification 304 can compare the network measurements (such as serving cell and timing advance) with a UE's known static patterns. The known static patterns can be based on a long time history (e.g., months) of UE call trace records. The locations frequently visited by a UE (also referred to herein as frequently visited places or FVPs) can be identified and estimated with corresponding serving cell and TA values. Since the UE is static at those FVP locations, the associated measurement (IMSI, serving cell, TA) patterns, if encountered in the future, can be used to infer that the UE is static. FVP patterns comprise an initial set of UE static patterns. If matched, mobility mode identification 304 can mark the record with a 'static' tag. Otherwise, mobility mode identification 304 can proceed to the next step.

At long duration inference 604, mobility mode identification 304 can apply long duration inferences. For example, mobility mode identification 304 can check the duration (the difference between a timestamp of a next change and a current timestamp) of a network measurement. If the duration is larger than a predetermined value, e.g., a "static_cutoff" value such as 10 minutes, the record can be marked with a 'static' tag. Otherwise mobility mode identification 304 can proceed to the next step.

At static interpolation 606, mobility mode identification 304 can perform static interpolation. If the duration is short, mobility mode identification 304 can calculate a static gap as a difference between a timestamp of a next tagged static record and a timestamp of a previous tagged static record. If the static gap is smaller than a predetermined value, e.g., a "static_cutoff2" value such as 5 minutes, the record can be marked with a 'static' tag since the UE is static right before and after.

At moving inference 608, mobility mode identification 304 can perform moving inference. Mobility mode identification 304 can mark records without a 'static' tag as 'moving'. At moving interpolation 610, mobility mode identification 304 can then perform moving interpolation, wherein mobility mode identification 304 can re-evaluate records tagged as 'static'. For each such record, mobility mode identification 304 can calculate the moving gap as the difference between a timestamp of a next tagged moving record and a timestamp of a previous tagged moving record. If the moving gap is smaller than a predetermined value, e.g., a "moving_cutoff" value such as 5 minutes, mobility mode identification 304 can revise the tag of this record as 'moving' since the UE is moving right before and after.
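The tagging cascade of operations 602-610 can be sketched as follows. This is a simplified single-pass reading under stated assumptions (time-ordered record dictionaries, a set of known (IMSI, cell, TA) FVP patterns, and the example cutoffs above), not the disclosure's exact implementation; the walkthrough continues with the static-mobility split 612 below:

    from datetime import timedelta

    STATIC_CUTOFF = timedelta(minutes=10)   # long duration inference 604
    STATIC_CUTOFF2 = timedelta(minutes=5)   # static interpolation 606
    MOVING_CUTOFF = timedelta(minutes=5)    # moving interpolation 610

    def tag_mobility(records, static_patterns):
        """Tag each record 'static' or 'moving', roughly following
        operations 602-610."""
        tags = [None] * len(records)
        # 602: static pattern inference against known FVP patterns.
        for i, r in enumerate(records):
            if (r['imsi'], r['cell'], r['ta']) in static_patterns:
                tags[i] = 'static'
        # 604: long duration inference. After duplicate elimination each
        # record is a measurement change, so the next record approximates
        # the next change.
        for i, r in enumerate(records[:-1]):
            if tags[i] is None:
                if records[i + 1]['timestamp'] - r['timestamp'] > STATIC_CUTOFF:
                    tags[i] = 'static'
        # 606: static interpolation between nearby static records.
        static_times = [r['timestamp'] for r, t in zip(records, tags) if t == 'static']
        for i, r in enumerate(records):
            if tags[i] is None:
                before = [t for t in static_times if t < r['timestamp']]
                after = [t for t in static_times if t > r['timestamp']]
                if before and after and after[0] - before[-1] < STATIC_CUTOFF2:
                    tags[i] = 'static'
        # 608: anything still untagged is moving.
        tags = ['moving' if t is None else t for t in tags]
        # 610: moving interpolation re-tags static records sandwiched
        # tightly between moving records.
        moving_times = [r['timestamp'] for r, t in zip(records, tags) if t == 'moving']
        for i, r in enumerate(records):
            if tags[i] == 'static':
                before = [t for t in moving_times if t < r['timestamp']]
                after = [t for t in moving_times if t > r['timestamp']]
                if before and after and after[0] - before[-1] < MOVING_CUTOFF:
                    tags[i] = 'moving'
        return tags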
At static-mobility split 612, mobility mode identification 304 can perform a static-mobility split, in which mobility mode identification 304 splits an active session time series into static and moving periods based on the tag of each record, by adding consecutive records tagged similarly to the same period if the time gap is small, such as 10 minutes or less.

FIG. 7 is a flow diagram representing example operations of network equipment to estimate UE routes, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 300 such as illustrated in FIG. 3.

At refine static location estimates 702, for UE records in a static period, network equipment 300 can apply static smoothing 305 to refine the location estimates. The output can be represented, e.g., as (IMSI, timestamp, network measurements, EST_LAT_static, EST_LON_static). When a specific (IMSI, network measurements) pattern is observed with high frequency over a long history, the update high frequency static pattern component 306 can mark the pattern as a high frequency static pattern, and the pattern can be stored with location indicated as (EST_LAT_static, EST_LON_static). This location estimate can be applied when the pattern is observed in the static periods of future time series.

At refine moving location estimates 704, for UE records in a moving period, network equipment 300 can apply moving smoothing 307 to improve the locations estimated in the period, to form a practical route. The corresponding output can be represented as (IMSI, timestamp, network measurements, EST_LAT_moving, EST_LON_moving). If such a moving pattern (IMSI, network measurements) is observed repeatedly over the historical trace of the UE, then update high frequency route pattern 308 can store the pattern with locations (EST_LAT_moving, EST_LON_moving), in order to guide UE mobility mode identification 304 and moving smoothing 307 in future session time series.

At share border location estimates 706, records at the border of a "mobility mode" change can be shared between static smoothing 305 and moving smoothing 307. For example, for a moving period, the static records right before and/or after this period can be added to the moving period time series as reference points. Moving smoothing 307 need not change the location estimation of those static points but can use those locations to regulate the estimated points when the UE is moving. Similarly, the moving locations right before and/or after a static period can also be included and referred to by static smoothing 305.

At retrieve active session landmarks 708, merge landmarks 309 can be configured to combine estimated locations from static smoothing 305 and moving smoothing 307 to retrieve active session landmarks. Landmarks can be represented as (IMSI, timestamp, network measurements, EST_LAT*, EST_LON*), where EST_LAT* = EST_LAT_static or EST_LAT_moving and EST_LON* = EST_LON_static or EST_LON_moving. At interpolate non-landmark records 710, for records in an original active session time series but not in landmarks, merge landmarks 309 can interpolate the UE locations based on timestamps between landmarks.
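Operation 710 amounts to a timestamp-based interpolation between surrounding landmarks; a minimal sketch, assuming landmarks as time-sorted (timestamp, latitude, longitude) tuples and records with datetime timestamps:

    def interpolate_non_landmarks(session, landmarks):
        """Linearly interpolate a location for each record, based on its
        timestamp's position between the surrounding landmarks."""
        results = []
        for rec in session:
            t = rec['timestamp']
            before = [lm for lm in landmarks if lm[0] <= t]
            after = [lm for lm in landmarks if lm[0] >= t]
            if before and after:
                (t0, lat0, lon0), (t1, lat1, lon1) = before[-1], after[0]
                frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                results.append((t, lat0 + frac * (lat1 - lat0),
                                lon0 + frac * (lon1 - lon0)))
            elif before or after:
                # Outside the landmark span: fall back to the nearest landmark.
                lm = before[-1] if before else after[0]
                results.append((t, lm[1], lm[2]))
        return results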
FIG. 8 illustrates example network equipment configured to estimate static user equipment locations based on network measurement data, in accordance with various aspects and embodiments of the subject disclosure. FIG. 8 includes example network equipment 800, wherein the example network equipment 800 can optionally process network measurement data associated with static user equipment locations. As such, aspects of network equipment 800 can optionally be included in the network equipment 300 described with reference to FIG. 3, and vice versa. For example, in some embodiments, network equipment 800 can implement, inter alia, the static smoothing 305 illustrated in FIG. 3. The network equipment 800 includes network measurement data store 350, geotagged time series 801, mobility mode identification 304, geotagging accuracy filter 802, location estimates candidate set 803, reliability calculation 804, geotagged static output 805, static patterns data store 806, and FVP static patterns 807.

In general, with reference to FIG. 8, the network equipment 800 can be configured to identify static UEs and estimate their location accurately using measurements reported by cells and eNodeBs in a telecommunications network. The network equipment 800 can utilize network measurement patterns learned over time to identify static UEs and select the most reliable location estimates. The network equipment 800 can therefore implement a framework to identify and geolocate static UEs in a telecommunication network. The network equipment 800 can utilize historical UE network measurement data to learn measurement patterns associated with static UEs, which can be stored in a static pattern database such as FVP static patterns 807. Given new UE network measurement data observed over shorter timespans, the network equipment 800 can use the learned static pattern information to determine whether the UE is static or not, and to estimate UE location during the static time periods. The network equipment 800 can estimate locations within a given static time period together, based on a derived reliability measure.

FIG. 9 is a flow diagram representing example operations of network equipment to estimate static user equipment locations based on network measurement data, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 800 such as illustrated in FIG. 8.

At acquire data 902, the network equipment 800 can be configured to acquire a time series of historical geotagged call trace data such as geotagged time series 801, which can be obtained for example from network measurement data store 350. The geotagged time series 801 can include, but need not be limited to, IMSI, timestamp, timing advance, signal strength, serving cell, estimated latitude, estimated longitude, and geotagging type information. The geotagging type information can include, e.g., identifications of geotagging methodologies used for geotagging, and associated accuracy information.

At mobility mode identification 904, the network equipment 800 can next apply mobility mode identification 304 to each record within geotagged time series 801, in order to determine whether a UE is static or in motion. Based on the determinations, mobility mode identification 304 can extract the periods where the UE is static. At derive candidate set of estimated static locations 906, within each static period, the network equipment 800 can derive a candidate set of estimated locations based on geotagging accuracy.
At calculate reliability of candidate locations 908, reliability of each candidate location can be calculated based on network measurements observed within that static time period. A subset of "most reliable" location estimates can be determined, optionally according to pre-specified rules. These reliable location estimates can be assigned to the call trace records in the static period.

At static landmark merge 910, network equipment 800 can next apply a static landmark merge algorithm to identify incorrectly tagged records, such as static records tagged as moving records due to measurements observed with large noise. The network equipment 800 can merge consecutive static periods to improve the estimation accuracy.

At apply static pattern labels and refine 912, the network equipment 800 can be configured to apply "static pattern" labels to network measurement/estimated location pairs corresponding to the "most reliable" location estimates. The network equipment 800 can feed this data back to the UE mobility mode identification 304 and, e.g., to the static smoothing 305 illustrated in FIG. 3, to refine the patterns over time, enabling the process to speed up over time. The network equipment 800 can include mobility mode identification 304, introduced in FIG. 3. Example operations of mobility mode identification 304 are previously described with reference to FIG. 6.

FIG. 10 illustrates example recurring network measurement data that can be used to identify static user equipment, in accordance with various aspects and embodiments of the subject disclosure. FIG. 10 includes a time series of network measurement data associated with a UE. FIG. 10 includes a timeline comprising different cell and TA values associated with the UE. The cell and TA values include, e.g., CELL2, TA2, followed by CELL1, TA1, followed by CELL2, TA2, followed by CELL1, TA1, followed by CELL5, TA3, followed by CELL1, TA1, followed by CELL1, TA1, followed by CELL4, TA4, followed by CELL5, TA5, followed by CELL1, TA1. An example CELL2, TA2 recurrent period 1002 comprises recurring instances of CELL2, TA2. An example CELL1, TA1 recurrent period 1004 comprises recurring instances of CELL1, TA1.

FIG. 11 is a flow diagram representing example operations of network equipment to perform a static recurrent pattern identification process, in accordance with various aspects and embodiments of the subject disclosure. FIG. 11 can be understood by reference to FIG. 10, and can be performed for example by network equipment 800 illustrated in FIG. 8.

At check recurrent time interval 1102, for a UE observed (cell, TA), network equipment 800 can check a recurrent time interval, defined as the time difference between a next time and a current time of observing a same (cell, TA). If the recurrent time interval is less than or equal to a predetermined value, e.g., a recurrent_session_cutoff value such as 10 minutes, then network equipment 800 can consider the UE to be in a same recurrent period and can assign a recurrent period identifier recurrent_period_id.

At group recurrent records 1104, network equipment 800 can be configured to group by (recurrent_period_id, cell, TA), and compute a max_time of max(timestamp), a min_time of min(timestamp), and a corresponding period duration, defined as recurrent_duration = max_time − min_time. If recurrent_duration is greater than or equal to a predetermined recurrent_time_cutoff value such as 4 minutes, network equipment 800 can mark the whole recurrent period (recurrent_period_id, min_time, max_time) as a 'static' recurrent period.
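A compact sketch of operations 1102 and 1104 follows (the label application of operation 1106 is described next); the record fields and helper names are illustrative assumptions:

    from datetime import timedelta
    from collections import defaultdict

    RECURRENT_SESSION_CUTOFF = timedelta(minutes=10)  # operation 1102
    RECURRENT_TIME_CUTOFF = timedelta(minutes=4)      # operation 1104

    def static_recurrent_periods(records):
        """Return (min_time, max_time) static recurrent periods per
        operations 1102 and 1104, for time-ordered records with
        'timestamp', 'cell', and 'ta' fields."""
        # 1102: assign a recurrent_period_id per (cell, TA), starting a
        # new period when the recurrent time interval exceeds the cutoff.
        last_seen, current_id, ids = {}, {}, []
        next_id = 0
        for r in records:
            key = (r['cell'], r['ta'])
            if key in last_seen and r['timestamp'] - last_seen[key] <= RECURRENT_SESSION_CUTOFF:
                ids.append(current_id[key])
            else:
                current_id[key] = next_id
                ids.append(next_id)
                next_id += 1
            last_seen[key] = r['timestamp']
        # 1104: group by (recurrent_period_id, cell, TA); keep periods
        # whose duration max_time - min_time reaches the cutoff.
        groups = defaultdict(list)
        for r, pid in zip(records, ids):
            groups[(pid, r['cell'], r['ta'])].append(r['timestamp'])
        return [(min(ts), max(ts)) for ts in groups.values()
                if max(ts) - min(ts) >= RECURRENT_TIME_CUTOFF]

Tracking last_seen per (cell, TA) key lets interleaved sightings, such as the alternating CELL1/CELL2 values of FIG. 10, accumulate into their own recurrent periods.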
At apply static labels 1106, network equipment 800 can be configured to check whether, for any UE records, the timestamp is within any static recurrent period (recurrent_period_id, min_time, max_time). If yes, network equipment 800 can mark the timestamp as 'static'.

FIG. 12 is a flow diagram representing example operations of network equipment to perform a static geolocation process, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 800 such as illustrated in FIG. 8.

At identify candidate location set 1202, network equipment 800 can determine a candidate location set. For each geolocation estimate in the geotagged time series of call trace records from a given static period, geotagging accuracy filter 802 can identify the geotagging method. Example geotagging methods may include, but are not limited to, fingerprinting, FVP geotagging, and handover arc intersection calculation. If the accuracy of a geotagging method identified by geotagging accuracy filter 802 is within an acceptable range (e.g., a median error of 100 meters or less, and a 75th percentile error of 200 meters or less), then geotagging accuracy filter 802 can add the estimated location to the location estimates candidate set 803. If no geolocation method has accuracy within the acceptable range, then geotagging accuracy filter 802 can pick a location with a best relative accuracy and add it to the location estimates candidate set 803. Optionally, geotagging accuracy filter 802 can confirm that estimated static locations are within a reasonable range of moving UE locations occurring immediately before and after a current static time period, considering the speed of motion required to traverse the distance between those locations.

At location estimation 1204, network equipment 800 can perform location estimation. If the candidate set 803 has a single element, then network equipment 800 can use it as the location estimate for up to every record within a static period. If there are multiple points in the candidate set 803, then network equipment 800 can use a suitable interpolation technique (e.g., linear, spline regression, median calculation) to derive a location estimate for the records in a static period.

At reliability calculation and pattern generation 1206, reliability calculation 804 can process the candidate set 803. For each location in the candidate set 803 and each record in the static period time series, reliability calculation 804 can compute: (1) a distance difference, defined as |UE to cell distance/78 − TA|; and (2) an azimuth gap, defined as |cell to UE azimuth − cell azimuth|. Furthermore, for each location in the candidate set 803, reliability calculation 804 can count the number of records where the distance difference is smaller than a distance cutoff (e.g., 2) and the azimuth gap is smaller than an azimuth cutoff (e.g., 90 degrees). If the resulting count is high (e.g., three or more), then reliability calculation 804 can mark a location along with the corresponding network measurement (IMSI, serving cell, TA, est_lat, est_lon) as highly reliable. Furthermore, if a reliable location is newly observed, reliability calculation 804 can add it to the static patterns data store 806. The static patterns data store 806 can include patterns learned using multiple different methods, and can also include FVP static patterns 807. The static patterns data store 806 can enable mobility mode identification 304 as well as UE geolocation estimation when a relevant pattern is observed within a static time period.
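The distance-difference and azimuth-gap test of operation 1206 can be sketched as follows, assuming roughly 78 meters per TA unit as in the text, cell coordinates and azimuths carried on each record under illustrative field names, and a wrap-around treatment of the azimuth gap that is this sketch's interpretation:

    import math

    TA_METERS = 78.0      # approximate meters per TA unit, per the text
    DISTANCE_CUTOFF = 2   # TA units
    AZIMUTH_CUTOFF = 90   # degrees
    COUNT_CUTOFF = 3      # "three or more"

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from point 1 to point 2, in degrees [0, 360)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dl = math.radians(lon2 - lon1)
        y = math.sin(dl) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def reliable_locations(candidates, records):
        """For each candidate (lat, lon), count the records whose distance
        difference and azimuth gap fall under the cutoffs, and mark
        high-count candidates as highly reliable."""
        reliable = []
        for lat, lon in candidates:
            count = 0
            for rec in records:
                dist = haversine_m(rec['cell_lat'], rec['cell_lon'], lat, lon)
                dist_diff = abs(dist / TA_METERS - rec['ta'])
                az_gap = abs(bearing_deg(rec['cell_lat'], rec['cell_lon'],
                                         lat, lon) - rec['cell_azimuth'])
                az_gap = min(az_gap, 360.0 - az_gap)  # wrap-around gap
                if dist_diff < DISTANCE_CUTOFF and az_gap < AZIMUTH_CUTOFF:
                    count += 1
            if count >= COUNT_CUTOFF:
                reliable.append((lat, lon, count))
        return reliable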
At location estimation within static time period 1208, network equipment 800 can perform location estimation within a static time period, in order to generate geotagged static output 805. Network equipment 800 can pick a location estimate ranked first with the highest count, optionally breaking ties using the average of the distance difference and the azimuth gap. Network equipment 800 can apply the selected location estimate for either all records within the static period, or only those records where the corresponding location estimates are not included in the reliable location estimate set.

At static landmark merge 1210, a static landmark merge process can merge some static periods and re-run operations, starting with geotagging accuracy filter 802, for changed static periods. In an example static landmark merge process, first, if any two consecutive static periods have estimated locations within a predetermined distance_cutoff value, such as 50 meters, and the time gap between the two periods is within a predetermined merge_cutoff value, such as 10 minutes, then the moving period between these two static sessions can be changed to be static. Second, the static landmark merge process can merge the three periods into a single static period. The first and second operations can be applied to all periods, and merges can be performed if necessary. Afterwards, the static location process can optionally be re-run.

FIG. 13 illustrates example network equipment configured to estimate moving user equipment locations based on network measurement data, in accordance with various aspects and embodiments of the subject disclosure. FIG. 13 includes example network equipment 1300, wherein the example network equipment 1300 can optionally process network measurement data associated with moving user equipment locations. As such, aspects of network equipment 1300 can optionally be included in the network equipment 300 described with reference to FIG. 3, and vice versa. For example, in some embodiments, network equipment 1300 can implement the moving smoothing 307 illustrated in FIG. 3. The network equipment 1300 includes network measurement data store 350, geotagged time series 1301, mobility mode identification 304, geotagging comparison 1302, location estimates weight assignment 1303, weighted smoothing 1304, snap routes on road 1305, geotagged moving UE output 1306, moving patterns data store 1308, and geographic clutter information 1307.

In general, with reference to FIG. 13, the network equipment 1300 can be configured to implement a framework for moving UE identification and route estimation with the measurements reported by cells and eNodeBs in the telecommunications network. Methods can utilize both the network measurement patterns and time-sequences of UE locations estimated by general online geotagging technologies. In a first step, mobility mode identification 304 can be used to determine a time period where the UE is in motion. Subsequently, a moving location refinement process can be applied to the geolocation estimates from such moving time periods. The moving location refinement process can remove outliers and smooth the overall route. This process can consider the geotagging method/accuracy of individual location estimates as well as further information, including geographic clutter (for example, road type) and speed of travel. In one proposed approach, embodiments according to FIG. 13 can include a framework to identify moving UEs and estimate their routes in a telecommunication network.
Embodiments can extract time periods where the UE is in motion from the time series of network measurements, via a UE "mobility mode" identification process. Embodiments can furthermore apply smoothing processing to remove outliers and estimate a robust route based on geographic clutter (road types) information. A "moving pattern" label can be applied to network measurement/estimated location pairs. This data can then be fed back to the UE mobility mode identification process and smoothing processing to refine the patterns over time, enabling process speed up over time.

FIG. 14 is a flow diagram representing example operations of network equipment to estimate moving user equipment locations based on network measurement data, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 1300 such as illustrated in FIG. 13.

At acquire data 1402, network equipment 1300 can acquire time-sequences of historical geotagged call trace data, such as geotagged time series 1301. Geotagged time series 1301 can include, but is not limited to, IMSI, timestamp, timing advance, signal strength, serving cell, estimated latitude, estimated longitude, and geotagging type information. The geotagging type information can include, e.g., identifications of geotagging methodologies used for geotagging, and associated accuracy information.

At mobility mode identification 1404, network equipment 1300 can be configured to apply mobility mode identification 304 to the network measurements acquired at 1402, to mark each record's UE status as static or moving. Based on the status, mobility mode identification 304 can form UE moving periods. At extract moving time periods 1406, network equipment 1300 can be configured to extract the time periods where the UE is in motion. For each record within such periods, location estimates weight assignment 1303 can calculate weights based on the estimation accuracy of the geotagging method, and on the clutter type associated with the estimated location and the implied UE speed.

At smoothing processing 1408, weighted smoothing 1304 can apply a robust smoothing process wherein the weights calculated at operation 1406 can be applied to individual records. The resulting route can be snapped to roads, if applicable. At apply moving pattern labels and refine 1410, network equipment 1300 can apply a "moving pattern" label to each network measurement/estimated location pair. Network equipment 1300 can feed this data back to the UE mobility mode identification 304 and moving UE route smoothing processing 1304 to refine the patterns over time, enabling process speed up. The network equipment 1300 can include mobility mode identification 304, introduced in FIG. 3. Example operations of mobility mode identification 304 are previously described with reference to FIG. 6.

FIG. 15 is a flow diagram representing example operations of network equipment to perform moving smoothing adjustments of estimated locations, in accordance with various aspects and embodiments of the subject disclosure. The illustrated operations can be performed, e.g., by network equipment 1300 such as illustrated in FIG. 13.

At weight assignment 1502, geotagging comparison 1302 and location estimates weight assignment 1303 can assign weights to location estimates. Operations associated with weight assignment 1502 can include assessing geotagging method accuracy, clutter matching, and optionally, confirmation of starting and ending locations.
In order to assess geotagging method accuracy, for each geolocation estimate in the time series of geotagged call trace records from a given moving period, geotagging comparison 1302 can identify the geotagging technique. Examples of geotagging techniques include, but are not limited to, fingerprinting, FVP geotagging, and handover arc intersection calculation. Geotagging comparison 1302 can form a set of candidate locations, comprising estimates located by geotagging techniques with accuracy within an acceptable range, e.g., with a median error less than or equal to 500 meters. Location estimates weight assignment 1303 can assign weights to each location estimate based on the geotagging technique accuracy.

In order to perform clutter matching, location estimates weight assignment 1303 can estimate the UE speed based on prior location estimates. For example, the speed can be estimated as the distance between consecutive geolocation estimates divided by the interval between the corresponding timestamps. Location estimates weight assignment 1303 can then compare the estimated speed with a clutter type associated with the location estimate. The clutter type can be retrieved from geographic clutter information 1307. Location estimates weight assignment 1303 can assign higher weights to those location estimates where the clutter type is in alignment with the speed level, for example, a "primary road" clutter type and a speed estimate in excess of 50 mph. If there is a discrepancy, then location estimates weight assignment 1303 can assign lower weights to those location estimates.

In order to confirm starting and ending locations, location estimates weight assignment 1303 can confirm that the estimated starting and ending locations are within a reasonable range of static UE locations immediately before and after a current moving time period. Embodiments can utilize, for example, the speed of motion required to traverse the distance between those locations.

At smoothing processing 1504, weighted smoothing 1304 can divide a time period where a UE is in motion into multiple segments so that each segment corresponds to a limited set of serving cells (e.g., no more than 10) and/or a similar speed level of the UE based on the time interval between points. Within each segment, weighted smoothing 1304 can perform weighted smoothing processing (for example, spline regression) to smooth the location estimates. Optionally, weighted smoothing 1304 can include static locations before and after a moving period as reference points. Furthermore, within each segment, weighted smoothing 1304 can apply a robust method such as bootstrapping to remove the impact of outliers on the smoothing. For example, weighted smoothing 1304 can randomly select a subset of location estimates and repeat, starting from division into segments. Weighted smoothing 1304 can identify outliers, e.g., original estimated locations that are far away from smoothened locations. Weighted smoothing 1304 can remove these outliers and redo weighted smoothing processing on the remaining points. Weighted smoothing 1304 can apply the smoothing model to the timestamps of outliers to estimate UE locations. Furthermore, within each segment, weighted smoothing 1304 can include the location estimates right before and after the segment to form a reasonable continuous route.
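As a rough illustration of weight assignment 1502 and smoothing processing 1504, the following sketch estimates speed from consecutive estimates, adjusts weights against an assumed clutter-to-speed table, and fits a weighted polynomial with one pass of outlier removal. The polynomial fit stands in for the spline regression mentioned above, the single outlier pass stands in for bootstrapping, the table and weight factors are placeholders, and haversine_m is the helper sketched earlier; none of this is taken verbatim from the disclosure:

    import numpy as np

    # Illustrative clutter-to-speed mapping (m/s); real values would come
    # from geographic clutter information 1307.
    CLUTTER_SPEED_RANGES = {'primary_road': (15.0, 45.0),   # ~34-100 mph
                            'residential': (0.0, 15.0)}

    def speed_mps(p0, p1):
        """Speed between consecutive (datetime, lat, lon) estimates:
        distance divided by the time interval."""
        dt = (p1[0] - p0[0]).total_seconds()
        return haversine_m(p0[1], p0[2], p1[1], p1[2]) / dt if dt > 0 else 0.0

    def clutter_weight(base_weight, clutter_type, speed):
        """Raise the weight when the implied speed matches the clutter
        type, lower it on a discrepancy; 1.5/0.5 are placeholder factors."""
        lo, hi = CLUTTER_SPEED_RANGES.get(clutter_type, (0.0, float('inf')))
        return base_weight * (1.5 if lo <= speed <= hi else 0.5)

    def weighted_smooth_segment(times, lats, lons, weights,
                                degree=3, outlier_m=300.0):
        """Weighted polynomial fit of lat/lon over time (epoch seconds)
        for one segment, with one pass of outlier removal and refit;
        outlier timestamps are re-estimated from the refit model."""
        t = np.asarray(times, dtype=float)
        t = (t - t.min()) / max(t.max() - t.min(), 1e-9)  # normalize
        lats, lons, w = map(np.asarray, (lats, lons, weights))
        deg = min(degree, len(t) - 1)
        fit_lat = np.polyfit(t, lats, deg, w=w)
        fit_lon = np.polyfit(t, lons, deg, w=w)
        # Rough residual in meters (1 degree is roughly 111 km).
        res = 111000.0 * np.hypot(lats - np.polyval(fit_lat, t),
                                  lons - np.polyval(fit_lon, t))
        keep = res <= outlier_m
        if keep.sum() > deg and not keep.all():  # refit without outliers
            fit_lat = np.polyfit(t[keep], lats[keep], deg, w=w[keep])
            fit_lon = np.polyfit(t[keep], lons[keep], deg, w=w[keep])
        return np.polyval(fit_lat, t), np.polyval(fit_lon, t)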
After processing the segments, weighted smoothing 1304 can concatenate the smoothed location estimates from multiple segments; for example, weighted smoothing 1304 can concatenate the smoothed location estimates from up to all segments. After the concatenation, weighted smoothing 1304 can evaluate smoothened estimated location distances from the original estimated locations. If the distance is small, e.g., smaller than a predetermined distance, weighted smoothing 1304 can mark the smoothened location as a reliable estimated point. If the distance is large, e.g., larger than the predetermined distance, weighted smoothing 1304 can use interpolation of nearby reliable smoothened points as the final smoothed estimation.

At pattern generation 1506, network equipment 1300 can store patterns in moving patterns data store 1308. Each record, denoted by (IMSI, serving cell, TA, est_lat, est_lon), can form a pattern. Network equipment 1300 can maintain a tally of the number of times each pattern is observed. If the number of observations of a given pattern exceeds a certain threshold (N), then (if it does not already exist in the data store) network equipment 1300 can add it to the moving patterns data store 1308.

At snap to road 1508, snap routes on road 1305 can optionally snap smoothed location estimates to road topology. Based on the smoothed estimated locations, snap routes on road 1305 can recalculate the UE speed. Using the geographic clutter information 1307, snap routes on road 1305 can identify a closest road segment corresponding to each estimated location, which matches the UE speed level. Snap routes on road 1305 can then adjust locations to snap the estimated locations onto the roads. If any sub-segment of the resulting route is deemed implausible, then network equipment 1300 can rerun the weighted smoothing 1304 on that sub-segment. Snap routes on road 1305 can repeat its operations until the route segment aligns with the geographic clutter data. Results of the operations according to FIG. 15 can comprise geotagged moving UE output 1306.

FIG. 16 is a flow diagram representing example operations of network equipment to adjust estimated user equipment location during a moving period, in accordance with various aspects and embodiments of the subject disclosure. The illustrated blocks can represent actions performed in a method, functional components of a computing device, or instructions implemented in a machine-readable storage medium executable by a processor. While the operations are illustrated in an example sequence, the operations can be eliminated, combined, or re-ordered in some embodiments. The operations illustrated in FIG. 16 can be performed, for example, by network equipment 1300 such as illustrated in FIG. 13.

Example operation 1602 comprises obtaining, by network equipment 1300 comprising a processor, a time sequence of network measurement data associated with a user equipment, such as geotagged time series 1301. Geotagged time series 1301 can include, e.g., historical geotagged call trace data comprising at least one of international mobile subscriber identity information, timestamp information, timing advance information, signal strength information, serving cell information, estimated latitude information, estimated longitude information, or geotagging type information. Example operation 1604 comprises identifying, by the network equipment 1300, within the time sequence of network measurement data 1301, a moving period in which the user equipment was moving. For example, mobility mode identification 304 can identify moving periods.
Example operation 1606 comprises identifying, by the network equipment 1300, based on network measurement data associated with the moving period, a group of candidate locations of the user equipment during the moving period. Identifying the group of candidate locations of the user equipment during the moving period can comprise assigning weights to location information included in the network measurement data associated with the moving period. The weights can be correlated with respective geotagging methods used to generate the location information. Furthermore, in some embodiments, the group of candidate locations of the user equipment during the moving period can comprise a static user equipment location, and the static user equipment location can be, e.g., a start location at a beginning of the moving period or an end location at an end of the moving period.

Example operation 1608 comprises estimating, by the network equipment 1300, based on first candidate locations of the group of candidate locations, a first user equipment route segment associated with a first user equipment speed. Example operation 1610 comprises estimating, by the network equipment 1300, based on second candidate locations of the group of candidate locations, a second user equipment route segment associated with a second user equipment speed, wherein the second user equipment speed is different from the first user equipment speed. Example route segments 231, 232 are illustrated in FIG. 2. Operation 1608 can include determining, by the network equipment 1300, the first candidate locations and the first user equipment speed using distances between the first candidate locations and times associated with the first candidate locations, as well as determining, by the network equipment 1300, the second candidate locations and the second user equipment speed using distances between the second candidate locations and times associated with the second candidate locations. Having determined route segments and speeds, the network equipment 1300 can confirm a start location and an end location associated with the moving period, based on the various segments and speeds, e.g., the first user equipment speed and the second user equipment speed.

Example operation 1612 comprises adjusting, by the network equipment 1300, the first candidate locations based on the first user equipment route segment and the first user equipment speed, resulting in first adjusted locations. Adjusting the first candidate locations based on the first user equipment route segment and the first user equipment speed can comprise, e.g., removing an outlier location from the first candidate locations and/or otherwise adjusting the first candidate locations in order to smooth the first user equipment route segment.

Example operation 1614 comprises comparing, by the network equipment 1300, the first candidate locations and the first user equipment speed with a first clutter type associated with the first user equipment route segment, and adjusting weights of the first candidate locations based on a first comparison result. For example, higher weights can be assigned for speeds that are within a range of expected speeds associated with a clutter type, and lower weights can be assigned for speeds that are farther from the expected speeds associated with a clutter type.
Example operation 1616 comprises adjusting, by the network equipment 1300, the second candidate locations based on the second user equipment route segment and the second user equipment speed, resulting in second adjusted locations. Similar to the first segment, the second candidate locations can be adjusted by, e.g., removing an outlier location from the second candidate locations and/or otherwise adjusting the second candidate locations in order to smooth the second user equipment route segment.

Example operation 1618 comprises comparing, by the network equipment 1300, the second candidate locations and the second user equipment speed with a second clutter type associated with the second user equipment route segment, and adjusting weights of the second candidate locations based on a second comparison result. Similar to operation 1614, higher weights can be assigned when speeds are within a range of expected speeds associated with a clutter type, and vice versa.

Example operation 1620 comprises storing, by the network equipment 1300, the first adjusted locations and the second adjusted locations as user equipment locations during the moving period. The adjusted locations from all segments can be concatenated, optionally along with static locations, to form an accurate user equipment location history that can be used for network planning or any other applications.

FIG. 17 is a flow diagram representing example operations of network equipment to estimate a route segment within a moving period, and adjust moving period locations within the route segment, in accordance with various aspects and embodiments of the subject disclosure. The illustrated blocks can represent actions performed in a method, functional components of a computing device, or instructions implemented in a machine-readable storage medium executable by a processor. While the operations are illustrated in an example sequence, the operations can be eliminated, combined, or re-ordered in some embodiments. The operations illustrated in FIG. 17 can be performed, for example, by network equipment 1300 such as illustrated in FIG. 13.

Example operation 1702 comprises obtaining a time sequence of network measurement data associated with a user equipment, e.g., geotagged time series 1301, and identifying within the time sequence of network measurement data, e.g., by mobility mode identification 304, network measurement data associated with a moving period of the user equipment. Example operation 1704 comprises identifying, based on network measurement data associated with the moving period of the user equipment, a group of candidate locations of the user equipment during the moving period. Example operation 1706 comprises estimating, based on candidate locations of the group of candidate locations, a user equipment route segment associated with a user equipment speed during at least a part of the moving period. The candidate locations and the user equipment speed can be determined, e.g., using distances between the candidate locations and times associated with the candidate locations. Furthermore, the candidate locations and the user equipment speed can optionally be compared with a clutter type associated with the user equipment route segment, and weights of the candidate locations can be adjusted based on a comparison result, as described herein.
Example operation 1708 comprises estimating, based on other candidate locations of the group of candidate locations, other user equipment route segments associated with other user equipment speeds during other parts of the moving period, other than at least the part of the moving period identified pursuant to operation 1704. Multiple different segments 231, 232 can be calculated, such as illustrated in FIG. 2.

Example operation 1710 comprises adjusting the candidate locations based on the user equipment route segment and the user equipment speed, resulting in adjusted locations. Adjusting the candidate locations based on the user equipment route segment and the user equipment speed can comprise, e.g., removing outlier locations from the candidate locations and using smoothing processing, as described herein. Example operation 1712 comprises storing the adjusted locations as user equipment locations during at least the part of the moving period. Other adjusted locations associated with other segments can also be stored, and all the various adjusted locations can be concatenated as a user equipment location history.

FIG. 18 is another flow diagram representing example operations of network equipment to estimate a route segment within a moving period, and adjust moving period locations within the route segment, in accordance with various aspects and embodiments of the subject disclosure. The illustrated blocks can represent actions performed in a method, functional components of a computing device, or instructions implemented in a machine-readable storage medium executable by a processor. While the operations are illustrated in an example sequence, the operations can be eliminated, combined, or re-ordered in some embodiments. The operations illustrated in FIG. 18 can be performed, for example, by network equipment 1300 such as illustrated in FIG. 13.

Example operation 1802 comprises identifying, based on network measurement data associated with a moving period of a mobile device, candidate locations of the mobile device during the moving period. Example operation 1804 comprises estimating, based on the candidate locations, a mobile device route segment associated with a mobile device speed during a part of the moving period. Example operation 1806 comprises determining the mobile device speed using distances between the candidate locations. Example operation 1808 comprises removing an outlier location from the candidate locations based on the mobile device route segment and the mobile device speed, resulting in adjusted locations.

Example operation 1810 comprises adjusting the candidate locations based on the mobile device route segment and the mobile device speed, resulting in the adjusted locations. Adjusting the candidate locations based on the mobile device route segment and the mobile device speed can comprise, e.g., smoothing processing such as adjusting the candidate locations in order to reduce a distance between a candidate location and a line associated with the route segment. For example, FIG. 2 illustrates a line associated with route 240, and adjustment of T4 moving location 204 toward the line. Example operation 1812 comprises storing the adjusted locations as mobile device locations during the part of the moving period. Further operations described herein in connection with the various other figures can also be performed in connection with FIG. 18.

FIG. 19 is a block diagram of an example computer that can be operable to execute processes and methods in accordance with various aspects and embodiments of the subject disclosure.
The example computer can be adapted to implement, for example, any of the various network equipment described herein. FIG. 19 and the following discussion are intended to provide a brief, general description of a suitable computing environment 1900 in which the various embodiments described herein can be implemented. While the embodiments have been described above in the general context of computer-executable instructions that can run on one or more computers, those skilled in the art will recognize that the embodiments can be also implemented in combination with other program modules and/or as a combination of hardware and software.

Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, IoT devices, distributed computing systems, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices. The illustrated embodiments herein can be also practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.

Computing devices typically include a variety of media, which can include computer-readable storage media, machine-readable storage media, and/or communications media, which two terms are used herein differently from one another as follows. Computer-readable storage media or machine-readable storage media can be any available storage media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media or machine-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable or machine-readable instructions, program modules, structured data or unstructured data. Computer-readable storage media can include, but are not limited to, random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), smart card, flash memory (e.g., card, stick, key drive) or other memory technology, compact disk (CD), compact disk read only memory (CD-ROM), digital versatile disk (DVD), Blu-Ray™ disc (BD) or other optical disk storage, floppy disk storage, hard disk storage, magnetic cassettes, magnetic strip(s), magnetic tape, magnetic disk storage or other magnetic storage devices, solid state drives or other solid state storage devices, a virtual device that emulates a storage device (e.g., any storage device listed herein), or other tangible and/or non-transitory media which can be used to store desired information.
In this regard, the terms “tangible” or “non-transitory” herein as applied to storage, memory or computer-readable media, are to be understood to exclude only propagating transitory signals per se as modifiers and do not relinquish rights to all standard storage, memory or computer-readable media that are not only propagating transitory signals per se. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium. Communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and includes any information delivery or transport media. The term “modulated data signal” or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. With reference again toFIG.19, the example environment1900for implementing various embodiments of the aspects described herein includes a computer1902, the computer1902including a processing unit1904, a system memory1906and a system bus1908. The system bus1908couples system components including, but not limited to, the system memory1906to the processing unit1904. The processing unit1904can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures can also be employed as the processing unit1904. The system bus1908can be any of several types of bus structure that can further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory1906includes ROM1910and RAM1912. A basic input/output system (BIOS) can be stored in a non-volatile memory such as ROM, erasable programmable read only memory (EPROM), EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer1902, such as during startup. The RAM1912can also include a high-speed RAM such as static RAM for caching data. The computer1902further includes an internal hard disk drive (HDD)1914(e.g., EIDE, SATA), one or more external storage devices1916(e.g., a magnetic floppy disk drive (FDD)1916, a memory stick or flash drive reader, a memory card reader, etc.) and an optical disk drive1920(e.g., which can read or write from a CD-ROM disc, a DVD, a BD, etc.). While the internal HDD1914is illustrated as located within the computer1902, the internal HDD1914can also be configured for external use in a suitable chassis (not shown). Additionally, while not shown in environment1900, a solid state drive (SSD) could be used in addition to, or in place of, an HDD1914. The HDD1914, external storage device(s)1916and optical disk drive1920can be connected to the system bus1908by an HDD interface1924, an external storage interface1926and an optical drive interface1928, respectively. 
The interface1924for external drive implementations can include at least one or both of Universal Serial Bus (USB) and Institute of Electrical and Electronics Engineers (IEEE) 1394 interface technologies. Other external drive connection technologies are within contemplation of the embodiments described herein. The drives and their associated computer-readable storage media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer1902, the drives and storage media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable storage media above refers to respective types of storage devices, it should be appreciated by those skilled in the art that other types of storage media which are readable by a computer, whether presently existing or developed in the future, could also be used in the example operating environment, and further, that any such storage media can contain computer-executable instructions for performing the methods described herein. A number of program modules can be stored in the drives and RAM1912, including an operating system1930, one or more application programs1932, other program modules1934and program data1936. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM1912. The systems and methods described herein can be implemented utilizing various commercially available operating systems or combinations of operating systems. Computer1902can optionally comprise emulation technologies. For example, a hypervisor (not shown) or other intermediary can emulate a hardware environment for operating system1930, and the emulated hardware can optionally be different from the hardware illustrated inFIG.19. In such an embodiment, operating system1930can comprise one virtual machine (VM) of multiple VMs hosted at computer1902. Furthermore, operating system1930can provide runtime environments, such as the Java runtime environment or the .NET framework, for applications1932. Runtime environments are consistent execution environments that allow applications1932to run on any operating system that includes the runtime environment. Similarly, operating system1930can support containers, and applications1932can be in the form of containers, which are lightweight, standalone, executable packages of software that include, e.g., code, runtime, system tools, system libraries and settings for an application. Further, computer1902can be enabled with a security module, such as a trusted processing module (TPM). For instance with a TPM, boot components hash next in time boot components, and wait for a match of results to secured values, before loading a next boot component. This process can take place at any layer in the code execution stack of computer1902, e.g., applied at the application execution level or at the operating system (OS) kernel level, thereby enabling security at any level of code execution. A user can enter commands and information into the computer1902through one or more wired/wireless input devices, e.g., a keyboard1938, a touch screen1940, and a pointing device, such as a mouse1942. 
Other input devices (not shown) can include a microphone, an infrared (IR) remote control, a radio frequency (RF) remote control, or other remote control, a joystick, a virtual reality controller and/or virtual reality headset, a game pad, a stylus pen, an image input device, e.g., camera(s), a gesture sensor input device, a vision movement sensor input device, an emotion or facial detection device, a biometric input device, e.g., fingerprint or iris scanner, or the like. These and other input devices are often connected to the processing unit1904through an input device interface1944that can be coupled to the system bus1908, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, a BLUETOOTH® interface, etc. A monitor1946or other type of display device can also be connected to the system bus1908via an interface, such as a video adapter1948. In addition to the monitor1946, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc. The computer1902can operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s)1950. The remote computer(s)1950can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer1902, although, for purposes of brevity, only a memory/storage device1952is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN)1954and/or larger networks, e.g., a wide area network (WAN)1956. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which can connect to a global communications network, e.g., the internet. When used in a LAN networking environment, the computer1902can be connected to the local network1954through a wired and/or wireless communication network interface or adapter1958. The adapter1958can facilitate wired or wireless communication to the LAN1954, which can also include a wireless access point (AP) disposed thereon for communicating with the adapter1958in a wireless mode. When used in a WAN networking environment, the computer1902can include a modem1960or can be connected to a communications server on the WAN1956via other means for establishing communications over the WAN1956, such as by way of the internet. The modem1960, which can be internal or external and a wired or wireless device, can be connected to the system bus1908via the input device interface1944. In a networked environment, program modules depicted relative to the computer1902or portions thereof, can be stored in the remote memory/storage device1952. It will be appreciated that the network connections shown are examples, and other means of establishing a communications link between the computers can be used. When used in either a LAN or WAN networking environment, the computer1902can access cloud storage systems or other network-based storage systems in addition to, or in place of, external storage devices1916as described above. Generally, a connection between the computer1902and a cloud storage system can be established over a LAN1954or WAN1956, e.g., by the adapter1958or modem1960, respectively.
Upon connecting the computer1902to an associated cloud storage system, the external storage interface1926can, with the aid of the adapter1958and/or modem1960, manage storage provided by the cloud storage system as it would other types of external storage. For instance, the external storage interface1926can be configured to provide access to cloud storage sources as if those sources were physically connected to the computer1902. The computer1902can be operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, store shelf, etc.), and telephone. This can include Wireless Fidelity (Wi-Fi) and BLUETOOTH® wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. The above description includes non-limiting examples of the various embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the disclosed subject matter, and one skilled in the art can recognize that further combinations and permutations of the various embodiments are possible. The disclosed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. With regard to the various functions performed by the above described components, devices, circuits, systems, etc., the terms (including a reference to a “means”) used to describe such components are intended to also include, unless otherwise indicated, any structure(s) which performs the specified function of the described component (e.g., a functional equivalent), even if not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosed subject matter may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. The terms “exemplary” and/or “demonstrative” as used herein are intended to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent structures and techniques known to one skilled in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements. The term “or” as used herein is intended to mean an inclusive “or” rather than an exclusive “or.” For example, the phrase “A or B” is intended to include instances of A, B, and both A and B. 
Additionally, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless either otherwise specified or clear from the context to be directed to a singular form. The term “set” as employed herein excludes the empty set, i.e., the set with no elements therein. Thus, a “set” in the subject disclosure includes one or more elements or entities. Likewise, the term “group” as utilized herein refers to a collection of one or more entities. The terms “first,” “second,” “third,” and so forth, as used in the claims, unless otherwise clear by context, are for clarity only and do not otherwise indicate or imply any order in time. For instance, “a first determination,” “a second determination,” and “a third determination,” do not indicate or imply that the first determination is to be made before the second determination, or vice versa, etc. The description of illustrated embodiments of the subject disclosure as provided herein, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described herein for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as one skilled in the art can recognize. In this regard, while the subject matter has been described herein in connection with various embodiments and corresponding drawings, where applicable, it is to be understood that other similar embodiments can be used or modifications and additions can be made to the described embodiments for performing the same, similar, alternative, or substitute function of the disclosed subject matter without deviating therefrom. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the appended claims below.
11860290
DETAILED DESCRIPTION The technology described herein relates to using multiple antennas, where each antenna receives information from another source that is then used to calculate a position of the multiple antennas. As shown inFIG.1, in some implementations, the multiple antennas are configured in a rotating antenna array100as described in more detail below.FIG.1Adepicts a perspective view of a rotating antenna array100. As shown inFIG.1A, the rotating antenna array100may include one or more antennas102a-102n(shown as102a,102b, and102cin this example). These antennas102may be attached to an antenna rotating boom108that allows each of the antennas102to rotate about a pivot point104. In some implementations, the pivot point104may act as a pivot point attachment to hold or retain the antenna rotating boom108and allow the antenna rotating boom108to rotate about the pivot point104. In some implementations, the rotating antenna array100may use a motor106or other movement device to cause the antenna rotating boom108to rotate and stop about the pivot point104and cause the antennas102a-102nto capture positional information as they rotate about the pivot point104and stop at various stopping points. In some implementations, as shown with respect toFIG.3, the antennas102a-102nmay each receive location information from one or more transmitting devices302a-302n(which may be Wi-Fi, Bluetooth, ultra-wideband, GPS, etc.) that are placed around a given area, and the transmitting devices302a-302nmay each transmit a signal of their location to one or more of the antennas102a-102nof the rotating antenna array100. In some implementations, the transmitting devices302a-302nmay have a known location: either they are placed at specific known locations that are then transmitted to the rotating antenna array100, or in other implementations, the transmitting devices302a-302nmay be able to transmit relative to each other and another known location (not shown) and determine the locations of the transmitting devices302a-302nbased on the other known location (not shown) that is capable of transmitting to the transmitting devices302a-302n. In further implementations, the transmitting devices302a-302ncan ping each other and, based on the received location information, can calculate relative positions and triangulate locations of one or more of the transmitting devices302a-302nbased on those calculations. As shown inFIG.1A, as the antennas102a-102nare rotated around the pivot point104, the antennas102a-102nmay receive location information from the one or more transmitting devices302a-302n. The location information received by the antennas102a-102ncorrelates to the respective position of the rotating antenna array100as shown with respect toFIG.4. The location information for the antennas102a-102nat each of their respective positions is then used to calculate a determined position of the rotating antenna array100. By calculating a plurality of location information positions for the antennas102a-102n, the rotating antenna array100can provide a more accurate determined position that accounts for various errors that are introduced into position determinations that do not use a rotating antenna array. For example, a position determination using a single antenna is prone to signal interruptions or signal delays that can cause errors to be introduced into the position determination.
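As context for the triangulation described above, the following is a minimal planar trilateration sketch, assuming three non-collinear transmitting devices with known coordinates and measured ranges; the function name and its interface are illustrative assumptions, not part of the disclosure.

```python
def trilaterate(beacons, ranges):
    """Planar position from three beacons with known (x, y) and measured ranges.

    Subtracting the circle equations (x - xi)^2 + (y - yi)^2 = ri^2 pairwise
    yields two linear equations in (x, y). Assumes non-collinear beacons.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero only if the beacons are collinear
    return ((c1 * b2 - b1 * c2) / det, (a1 * c2 - c1 * a2) / det)
```

In practice, each antenna102would produce such a fix at every stopping point, and the averaging described below would combine those fixes.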
By capturing a plurality of location information positions using the rotating antenna array100, the position determination can account for any signal interruptions or signal delays between the one or more antennas102a-102nand the one or more transmitting devices302a-302n. As shown inFIG.1A, the antenna rotating boom108may be configured to rotate about the pivot point104and may hold or position one or more antennas102a-102n. In the example shown inFIG.1A, the antenna rotating boom108includes three arms that each hold an antenna102a-102c, equally spaced about the pivot point104. It should be understood that any number of antennas102a-102nmay be attached to the antenna rotating boom108and those antennas102a-102nmay be equally spaced or may be unequally spaced, such as where different lengths of arms of the antenna rotating boom108allow for different extension lengths for the antenna attachments holding the antennas102. In some implementations, the lengths of the arms of the antenna rotating boom108are not limited to the structure shown inFIGS.1A-1C, but may be extended to any length as needed. Additionally, as the lengths of the arms of the antenna rotating boom108are extended, the achievable accuracy of the determined position using the rotating antenna array100increases. In some implementations, the antenna rotating boom108may rotate about a single axis on a two-dimensional plane for capturing location information, while in further implementations, the antenna rotating boom108may allow for rotations about the pivot point104in three dimensions to capture a three-dimensional spread of location information for additional positional determinations. As shown inFIG.1A, the rotating antenna array100may use a motor106to cause the antenna rotating boom108to rotate about the pivot point104. In some implementations, the motor106may allow for a variability of speed, and the speed of the motor106can be varied as needed to capture location information for position determinations. In some implementations, the motor106can be configured to cause the rotating antenna array100to rotate at a consistent speed and the antennas102may sample various location information periodically as the rotating antenna array100moves. In further implementations, the motor106can alternate or change the speeds and a position engine216can use the motor106speed at the time of sampling to calculate positions of the antennas102. In further implementations, the antennas102can sample after the motor106moves through various stopping points402as shown with respect toFIG.4. In some implementations, the motor106may connect to a portable power supply such as batteries or a portable power source to allow for easy movement and positioning of the rotating antenna array100. In some implementations, as shown inFIG.1A, the gears that are rotated by the motor106may be exposed, while in further implementations, the motor106and other components may be enclosed within a housing (not shown) that protects the various components of the rotating antenna array100. In further implementations, other movement devices may be used to cause the antenna rotating boom108to rotate about the pivot point104, such as magnets or other movement devices. In some implementations, the rotating antenna array100may be configured to be mounted on other devices to determine a specific position of the other device. For example, in one implementation, the rotating antenna array100may be mounted on a layout projection device502as shown with respect toFIG.5.
The specific position of the layout projection device502may then be determined using the rotating antenna array100and the layout projection device502may use the determined location during the operation of the layout projection device502. FIG.1Bshows a top-down view of the rotating antenna array100. As shown inFIG.1B, the antennas102a-102cmay be mounted on the ends of the antenna rotating boom108and allow for clockwise and/or counterclockwise rotation about the pivot point104. As shown in the example inFIG.1B, the antennas102may be mounted using various mounting components to the ends of the antenna rotating boom108. In some implementations, the rotating antenna array100may be configured to easily attach various antennas102to the ends of the antenna rotating boom108for easy setup and/or installation of the rotating antenna array100.FIG.1Cshows a side view of the rotating antenna array100. As shown inFIG.1C, the antennas102may include the antenna receiver at a top of the antenna mounting apparatus110, and additional components needed for antenna102operation may be included on the antenna mounting apparatus110. InFIGS.1A-1Cand the remaining figures, a letter after a reference number, e.g., “102a,” represents a reference to the element having that particular reference number. A reference number in the text without a following letter, e.g., “102,” represents a general reference to instances of the element bearing that reference number. The rotating antenna array100may include one or more electronic information sources (not shown) that can be accessed by other devices. The information source(s) may be local and/or remote, and include one or more non-transitory computer-readable media, for storing, retrieving, updating, deleting, and/or otherwise manipulating data, such as blueprint documents, positional data, user settings, premises-related settings, etc. The rotating antenna array100may be communicatively coupled to the electronic information source via a communications bus or a computer network (e.g., a wired and/or wireless network connection and corresponding interfaces, etc. (not shown)). In some embodiments, an electronic information source may be a computing device that includes a memory and a processor, for example a server, a laptop computer, a desktop computer, a tablet computer, a mobile telephone, a smartphone, a personal digital assistant (PDA), a mobile email device, a webcam, a user wearable computing device, or any other electronic device capable of accessing a network. The electronic information source may, in some cases, provide general graphics and multimedia processing for any type of application. In some embodiments, the electronic information source may include a display for viewing and/or inputting information on an application, such as blueprint documents, positional data, user settings, premises-related settings, etc. A computer network can be a conventional type, wired or wireless, and may have numerous different configurations including a star configuration, token ring configuration or other configurations. Furthermore, the network may include a local area network (LAN), a wide area network (WAN) (e.g., the Internet), and/or other interconnected data paths across which multiple devices may communicate. In some embodiments, the network may be a peer-to-peer network. The network may also be coupled to or include portions of a telecommunications network for sending data in a variety of different communication protocols.
In some embodiments, the network may include Bluetooth communication networks or a cellular communications network for sending and receiving data including via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, push notifications, WAP, email, etc. FIG.2is a block diagram illustrating an example rotating antenna array100. The example rotating antenna array100may include a communication unit202, a processor204, a memory206, a storage system210, a location sensor212, an orientation sensor214, and/or a position engine216according to some examples. The components of the rotating antenna array100may be configured to capture location information and determine a position of the rotating antenna array100, as discussed elsewhere herein. The components of the rotating antenna array100are communicatively coupled by a bus and/or software communication mechanism224, which may represent an industry standard architecture (ISA), a peripheral component interconnect (PCI) bus, a universal serial bus (USB), or some other suitable architecture. The processor204may execute software instructions by performing various input/output, logical, and/or mathematical operations. The processor204may have various computing architectures to process data signals including, for example, a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, and/or an architecture implementing a combination of instruction sets. The processor204may be physical and/or virtual, and may include a single processing unit or a plurality of processing units and/or cores. In some embodiments, the processor204may be coupled to the memory206via the bus and/or software communication mechanism224to access data and instructions therefrom and store data therein. The bus and/or software communication mechanism224may couple the processor204to the other components of the computing device200including, for example, the memory206, the communication unit202, the position engine216, and the storage system210. It should be understood that other processors, operating systems, sensors, displays and physical configurations are also possible. The memory206may store and provide access to data for the other components of the rotating antenna array100. The memory206may be included in a single computing device or may be distributed among a plurality of computing devices as discussed elsewhere herein. In some embodiments, the memory206may store instructions and/or data that may be executed by the processor204. The instructions and/or data may include code for performing the techniques described herein. For example, in one embodiment, the memory206may store the position engine216. The memory206is also capable of storing other instructions and data, including, for example, an operating system, hardware drivers, other software applications, databases, etc. The memory206may be coupled to the bus or software communication mechanism224for communication with the processor204and the other components of the layout device502.
The memory206may include one or more non-transitory computer-usable (e.g., readable, writeable) devices, a static random access memory (SRAM) device, an embedded memory device, a discrete memory device (e.g., a PROM, FPROM, ROM), a hard disk drive, an optical disk drive (CD, DVD, Blu-ray™, etc.), which can be any tangible apparatus or device that can contain, store, communicate, or transport instructions, data, computer programs, software, code, routines, etc., for processing by or in connection with the processor204. In some embodiments, the memory206may include one or more of volatile memory and non-volatile memory. It should be understood that the memory206may be a single device or may include multiple types of devices and configurations. The communication unit202is hardware for receiving and transmitting data by linking the processor204to the network and other processing systems. The communication unit202may receive data, such as blueprint documents or other electronic information, from other electronic information source(s), and may provide the data and/or determined positions to the other components of the rotating antenna array100, for processing and/or storage. In some embodiments, the communication unit202may transmit data (e.g., positional data, settings, premises-related information, etc.) to other electronic information source(s) for processing and/or display. The communication unit202may include one or more wired and/or wireless interfaces. The communication unit202may provide standard connections to the network for distribution of files and/or media objects using standard network protocols, such as TCP/IP, HTTP, HTTPS and SMTP. In some embodiments, the communication unit202may include a port for direct physical connection to a client device (not shown) or to another communication channel. For example, the communication unit202may include an RJ45 port or similar port for wired communication with an electronic information source. In some embodiments, the communication unit202may include a wireless transceiver (not shown) for exchanging data with the electronic information source or any other communication channel using one or more wireless communication methods, such as IEEE 802.11, IEEE 802.16, Bluetooth® or another suitable wireless communication method. In some embodiments, the communication unit202may include a cellular communications transceiver for sending and receiving data over a cellular communications network such as via short messaging service (SMS), multimedia messaging service (MMS), hypertext transfer protocol (HTTP), direct data connection, push notification, WAP, e-mail or another suitable type of electronic communication. Other suitable variations for communicating data are also possible and contemplated. The storage system210is an electronic information source that includes a non-transitory memory that stores data, such as the data discussed elsewhere herein. The storage system210may be local and/or remote. The storage system210may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory or some other memory device. In some embodiments, the storage system210also may include a non-volatile memory or similar permanent storage device and media including a hard disk drive, a solid state drive, a floppy disk drive, or some other mass storage device for storing information on a more permanent basis.
In the illustrated embodiment, the storage system210is communicatively coupled to the bus or software communication mechanism224. The location sensor212may include one or more sensors that capture attribute(s) of an external environment of the layout device502and determine a physical location of the rotating antenna array100based on the attribute(s). The location sensor212may include hardware and/or software capable of determining the physical location. The location sensor212may be configured to provide the location data and/or physical location to the rotating antenna array100, and/or may store the data for access and/or retrieval thereby. In some implementations, the location sensors212may include the one or more antennas102a-102nas described elsewhere herein. The one or more antennas102a-102nmay be configured to receive and/or transmit various signals to other transmitting devices302and may be able to determine lengths of time for the various signals, signal strengths, directionality, etc. of the various signals being transmitted and/or received between the antennas102a-102nand the transmitting devices302. In some embodiments, the location sensor212may include one or more sensors such as a Global Positioning System (GPS) sensor, Global Navigational Satellite System (GLONASS) sensor, Galileo system sensor, a BeiDou sensor, an IRNSS sensor, a QZSS sensor, a LIDAR sensor, an ultra-wideband sensor, a radio-positioning sensor, and/or a Real Time Location System (RTLS) sensor. An RTLS sensor may be used to automatically identify and track the locations of objects/people in real time. An RTLS may use active RFID, active RFID-IR, optical locating, infrared, low-frequency signpost identification, semi-active RFID, passive RFID RTLS locating via steerable phased array antennae, radio beacons, ultrasound identification, ultrasonic ranging, wide-over-narrow band, wireless local area network, Bluetooth, clustering in noisy ambience, and/or bivalent systems to track the locations. In some embodiments, the location sensor212may be embodied by the communication unit202, and positional data may be determined by triangulating position between radio communication nodes (e.g., other wireless transceivers), triangulation data determined by a third party (e.g., a wireless carrier), etc. Any other suitable variations for determining location are also possible and contemplated. In some embodiments, the location sensor212may be configured to collect location data based upon a request to collect location data. In further embodiments, the location sensor212may collect location data continuously or at regular intervals. In some embodiments, the rotating antenna array100may determine a physical location of the layout device502to within a precise threshold, such as 3/16 of an inch, in order to provide precise accuracy of the layout device502and the projection. The orientation sensor214may include one or more sensors that collect orientation data and determine an orientation (e.g., pitch, azimuth, yaw, roll, etc.) of the rotating antenna array100. The orientation sensor214may be hardware and/or software capable of determining the orientation of the rotating antenna array100. The orientation sensors214may be configured to provide the orientation data to the rotating antenna array100and/or the position engine216. In some embodiments, the orientation sensor214may include one or more accelerometers, gyroscopes, or other devices capable of detecting orientation.
In some embodiments, the orientation sensor214may be configured to determine yaw, azimuth, pitch, and/or roll. In some embodiments, the orientation sensor214may be configured to collect orientation data based upon a request to collect orientation data. In further embodiments, the orientation sensor214may collect orientation data continuously or at regular intervals. In some embodiments, the orientation sensor214may determine the orientation of the rotating antenna array100to within a precise threshold, such as 0.1, 0.5, 1, 1.5, or 2+ degrees of accuracy, in order to provide precise accuracy of the rotating antenna array100for the determined positions. The position engine216may include computer logic to provide the functionality for determining a position of the rotating antenna array100using the collected location information from the one or more antennas102a-102nof the rotating antenna array100and provide the determined position to other devices. The computer logic may be implemented in software, hardware, and/or a combination of the foregoing. The position engine216may be configured to receive a plurality of location information that may include exact positions of each antenna102on the antenna rotating boom108, a speed of rotation, and a received signal from a transmitting device for each of the antennas102at specific time intervals and/or position stops. The position engine216may then be configured to use the location information to calculate a specific determined position of the rotating antenna array100. FIG.3shows a system300with an example rotating antenna array100and example transmitting devices302a-302n. As shown, the transmitting devices302a-302nmay be dispersed throughout an area and may transmit and/or receive signals to and/or from the antennas102a-102n. As the rotating antenna array100rotates the antennas102to different positions around the pivot point104, the antennas102receive position information associated with the signals from the transmitting devices302a-302n. In some implementations, the accuracy of the location determinations using the rotating antenna array100can be up to 1 mm. In some implementations, anything that might interrupt a transmitting signal from the transmitting device302, such as interference of the signal, may create errors in the accuracy. The rotating antenna array100described herein can account for those errors in accuracy, where, if strange or unexpectedly deviating data is received from one of the transmitting devices302, the data from that transmitting device302(or beacon) can be ignored and other data from the other transmitting devices302used instead. In some implementations, the number of transmitting devices302a-302nis sufficient that if data errors are received from one or more transmitting devices302, the remaining transmitting devices302a-302nwill be sufficient to continue providing accurate location data and/or location information. The potential data errors can be ignored in some implementations where that data is dropped. In further implementations, machine learning algorithms can be used to predict what the data error is and estimate what the location data should be based on various inputs, such as historical location data, the positions of the other transmitting devices302relative to the transmitting devices302producing errors, etc. The machine learning algorithms can detect when data is potentially coming in with errors and account for these errors when doing the location determinations.
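A full machine-learning detector is beyond a short example, but the gist of ignoring a deviating beacon can be sketched with a simple median-residual filter; the function name, the threshold k, and the median heuristic are illustrative assumptions standing in for whatever learned model an implementation actually uses.

```python
import math
import statistics

def filter_beacon_fixes(fixes, k=3.0):
    """Drop per-beacon position fixes that deviate sharply from the consensus.

    fixes: list of (x, y) positions implied by individual transmitting devices.
    A fix is ignored when its distance to the median position exceeds k times
    the median of all such distances, so the remaining beacons keep providing
    accurate location data.
    """
    mx = statistics.median(f[0] for f in fixes)
    my = statistics.median(f[1] for f in fixes)
    dists = [math.hypot(f[0] - mx, f[1] - my) for f in fixes]
    scale = statistics.median(dists) or 1e-9
    return [f for f, d in zip(fixes, dists) if d <= k * scale]
```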
In some implementations, the antennas102are spaced along a circular trajectory at predetermined distances that may be, but are not necessarily, equal, and rotate about the center of that circular trajectory at the pivot point104. In some implementations, the further apart the rotating antennas102are, the more accurate the location determination may be. Based on design constraints for various uses, different antenna distances are contemplated. For example, in a portable use case, the rotating antenna array100may be designed to fit within a cover/case and be anywhere from 10-15 cm apart based on cover/case design. In some implementations, where the desire is to improve a GPS/Radar/Sonar system, the rotating antennas add a three-dimensional picture. For example, in this implementation, the rotating antenna array could be installed in a mast on a ship and the antennas of the array could be much larger distances apart, such as 30 feet wide, etc. FIG.4depicts a graphical representation of a top-down view of a rotating antenna array100with various stopping positions402forming a circle along a single plane. As shown inFIG.4, the rotating antenna array100rotates in a clockwise fashion and stops at various stopping points402allowing each of the antennas102to capture location data at the stopping point. In an example with three rotating antennas102a-102c, at a first stopping point, a first antenna102amay capture location information at stopping position402awhile simultaneously a second antenna102bmay capture location information at stopping point402dand a third antenna102cmay capture location information at stopping point402h. In this manner, each of the antennas102captures separate location information, represented as that antenna's stopping point antenna position, and the stopping point antenna positions can be used to calculate a stopping point center position at each of the stopping points402. The rotating antenna array100may then rotate the antennas to a second stopping position and the antennas102a-102cmay simultaneously capture location information at the new respective stopping points. The result is location data captured at specific stopping points at specific times, after which the antennas are rotated to new stopping points where location data is captured at new times. In some implementations, all stopping positions are an equal distance from each other along the circular trajectory of the antennas102or receivers, and the number of positions is a divisor of the circular trajectory, while in further implementations the stopping points may be at unequal distances and captured at different intervals. The position engine216may then compute the x0position at the center of a single antenna102given the xnmeasurements at each stop/measurement point, with m being the total number of measurement points, as shown below:

$$x_0 = \frac{\sum_{n=1}^{m} x_n}{m}$$

The number of increments is a divisor into the whole of the circle. The position engine216then repeats the calculation process for each of the antennas102to be determined. The solution yields the position of the center point. In some implementations, the center position represents a center point. To further increase the accuracy of the position of the center pivot point, the calculations are repeated for each component of each antenna/receiver to give an average of the collective result. The technology for measuring position using the given antennas can be, but is not limited to, GPS, Ultra-Wideband, Bluetooth, RFID, Radio Beacon, Sonar, Radar, or WiFi.
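The computation above amounts to averaging one antenna's fixes over all of its stops. A minimal sketch follows (the function name and the coordinate-tuple representation are illustrative assumptions); the cumulative average across antennas described next would then combine the per-antenna results.

```python
def center_from_stops(measurements):
    """x0 for one antenna: the mean of its measured (x, y) fixes over m stops.

    With stops spread evenly around the circular trajectory, the antenna's
    offsets from the pivot cancel, so the mean lands on the center point.
    """
    m = len(measurements)
    return (sum(p[0] for p in measurements) / m,
            sum(p[1] for p in measurements) / m)
```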
In other implementations, the antennas102can receive signals from ultrasonic or other acoustic transmitters. The position engine216may then compute a cumulative average x0position (determined position) for three antennas102a-102c, with measurements represented by Anxn, in a rotating antenna array100having m measurement points, as shown below, divided by the number of antennas (3 in this example):

$$x_0 = \frac{\sum_{n=1}^{m}\frac{A_1 x_n}{m} + \sum_{n=1}^{m}\frac{A_2 x_n}{m} + \sum_{n=1}^{m}\frac{A_3 x_n}{m}}{3}$$

The position engine216may then use the cumulative average position to identify a determined position, such as a determined position of the device, e.g., a device position. In further implementations, based on where the rotating antenna array100is attached to another device, the position engine216can calculate the position of the other device (e.g., another device position) by extrapolating from the determined position based on the mounting data. FIG.5shows a graphical representation of an example system500including a layout device502projecting a layout508using the positional information from the rotating antenna array100. As shown, the layout device502may receive the determined rotating antenna array position and may use that location to accurately project a layout508. The layout device502may also include a projector504capable of projecting a representation of the layout508. In some embodiments, the projector504may be made up of one or more mirrors and one or more emitters, as discussed in further detail below. As depicted, the layout device502is arranged to project the representation of a layout508on a physical surface506. The physical surface506may be any suitable surface of a premises. For example, the physical surface506may be a floor, wall, ceiling, etc., of a work site. In some embodiments, the representation of the layout508may be projected within a projection area. The projection area may be based on an area of a blueprint, and the layout device502may project the representation of the layout508within the projection area on the physical surface506. In some embodiments, the representation of the layout508may include one or more objects from a blueprint or other design document (referred to simply as a blueprint). In some embodiments, the blueprint is an electronic document that describes the building specifications for the premises, and includes pictorial and textual information of the elements comprising the premises, such as wall locations, pipe locations, electrical conduit locations, footing locations, appliance locations, walkways, landscaping elements, etc., for the indoor and/or outdoor area(s) of the premises, such as the various floors, yards, lots, etc., of the premises. The projected objects may include any of these elements included in the blueprint. In a non-limiting example, the layout device502may be used on a construction site of a building. A worker may upload a blueprint to the layout device502and the layout device502may be positioned above a portion of the work site that will be measured based on the layout included within the blueprint. In some embodiments, a worker may use an application executed on a computer, such as a mobile device, to configure the projection. For instance, inputs provided via the application may specify the blueprint, configure the projection area of the layout device502, etc.
Continuing this example, the layout device502may receive positional information from the rotatable antenna array100, which in some implementations may be mounted on the layout device502. Using the determined location, the layout device502may identify a portion of the blueprint that corresponds to the physical location of the layout device502and identify objects represented in the blueprint in relation to the physical location of the layout device502. Using the positional information from the rotatable antenna array100, the layout device502may also calculate positions for the mirrors of the projector of the layout device502in order to project a representation of the layout508on the physical surface506. When determining the positions of the mirrors, the layout device502may, in some cases, also make any orientational adjustments, such as adjustments to address skewing, keystoning, etc., of the projection. This allows workers on site to quickly set up the layout device502without having to manually position and level the layout device502before projecting the representation of the layout508, which advantageously increases accuracy and speed while reducing error. Further, the projected layout508on the physical surface506can provide significantly more detail and resolution for the components to be built than existing solutions (e.g., pencil markings, chalk markings, spray paint, etc.), and allows the workers to visualize the layout and accurately place and build structural components depicted in the representation of the layout508. In a further example, a worker can conveniently and quickly move the layout device502to different locations on the premises, and the layout device502may automatically determine the different locations using the rotatable antenna array100and automatically provide the layouts related to the different locations. In some embodiments, the layout device502may include and/or be connected to a moveable device, such as a robot or drone, and the layout device502may be configured to automatically, or based on instruction, move to various locations within the premises. For instance, the layout device502may be configured to follow a worker or follow a predetermined route to project a layout508as the layout device502moves. In some embodiments, the layout device502may provide representations of the layout508using certain visually distinguishable indicators, such as different colors, shapes, line-widths and/or types, etc., to differentiate different components from one another, such as walls from footings, etc. In further embodiments, the layout device502may be programmed to include visual representations (lines, arrows, measurements, etc.) of dimensions for the components in the layout508representation that it projects. For instance, dimensions of walls, walkways, appliances, and other objects may be determined from the blueprint and projected for visualization by the workers. FIG.6is a flow chart600of a method of determining a position of a rotating antenna array100. At602, the rotating antenna array100is arranged at a first stopping position402with the antennas102. The antennas102capture first location data at the first stopping position402afrom the transmitting devices302. The first location data may be used to determine the respective locations of each of the antennas102at the first stopping position402a. At604, the rotating antenna array100may then cause the motor106to move the antennas102to the second stopping position402b.
The second stopping position402bmay be predetermined by the position engine216and the position engine216may send signals causing the motor actuators to move the rotatable antenna boom108to the second stopping position402b. At606, the antennas102may capture second location data at the second stopping position402bfrom the transmitting devices302. The second location data may be used to determine the respective locations of each of the antennas102at the second stopping position402b. At608, the rotating antenna array100may then cause the motor106to move the antennas102to the third stopping position402c. The third stopping position402cmay be predetermined by the position engine216and the position engine216may send signals causing the motor actuators to move the rotatable antenna boom108to the third stopping position402c. At610, the antennas102may capture third location data at the third stopping position402cfrom the transmitting devices302. The third location data may be used to determine the respective locations of each of the antennas102at the third stopping position402c. It should be understood that any number of stopping positions402may be employed by the position engine216and that the rotating antenna array100is not limited to a specific number of stopping positions402, antennas102, or transmitting devices302. At612, the position engine216may determine a first center point at the first stopping position402a, a second center point at the second stopping position402b, and a third center point at the third stopping position402cusing the first location data, the second location data, and the third location data respectively. In some implementations, the position engine216may analyze the first, second, and third location data to identify any errors or anomalies in the location data to ignore. The position engine216may use various machine learning algorithms to identify information that does not fall within expected location data and ignore those anomalies as errors. The position engine216may calculate the various center points as described elsewhere herein. At614, the position engine216may then determine a rotating antenna array position using the determined first center point, second center point, and third center point as described elsewhere herein. It should be understood that the first center point, the second center point, and the third center point are merely used as examples and any number of center points can be calculated based on the collected location information. At616, the position engine216may then provide the determined rotating antenna array position to another device for additional use. In some embodiments, the technology described herein may be used for alternative solutions that incorporate projecting a layout. For example, the layout device502may be used for hanging pictures on a wall. A user may take a picture of the back of a picture frame and upload the picture to the layout device502and the layout device502may project on the wall the location of the picture frame as well as where the holes for hanging the picture should be located based on the back of the picture frame image. In further examples, the layout device502may be used to project the location of can lights or other hardware used in construction of buildings. Specifically, the layout device502may project the location of the light or other hardware, as well as projecting where mounting components should be placed.
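Returning to the flow ofFIG.6, the following sketch ties operations602-616together. It is illustrative only: the rotate_to and capture callables are assumed placeholders for the motor control and antenna sampling described above, and the averaging mirrors the center-point calculations given earlier.

```python
def determine_array_position(stops, antennas, rotate_to, capture):
    """Visit each stopping position, capture a fix from every antenna,
    derive a center point per stop (operation 612), then average the center
    points into the overall array position (operation 614)."""
    center_points = []
    for stop in stops:
        rotate_to(stop)  # operations 604/608: motor moves the boom to the stop
        fixes = [capture(antenna, stop) for antenna in antennas]  # 602/606/610
        center_points.append((sum(f[0] for f in fixes) / len(fixes),
                              sum(f[1] for f in fixes) / len(fixes)))
    n = len(center_points)
    return (sum(c[0] for c in center_points) / n,  # operation 614
            sum(c[1] for c in center_points) / n)
```

At616, the result would then be provided to another device, e.g., the layout device502.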
In further embodiments related to construction, the layout device502may be used to project a leveling of a surface. The layout device502may highlight or otherwise indicate areas that are not proper grade or height and track in real time the grading of the area, as well as provide indications of the level of a projectable area in real time. In further embodiments, the layout device502may be used to project routes, such as infrared routes. A route could be determined and uploaded to the layout device502and the layout device502may project the route onto a projectable surface. In further embodiments, the layout device502may be configured to follow the route and update the projected route as the layout device502moves along the route. In further embodiments, the layout device502may be used to project a key or token of a specific layout to unlock a location. For example, a special image specific to a function could be projected to a receiver to unlock a door. In further embodiments, the key or token could be projected onto an object in motion and the receiver located on the object in motion may be configured to detect the special image projected by the layout device502. In further embodiments, the layout device502may be used as a visual inspection tool for manufacturing purposes by a human operator or a smart vision camera. The layout device502may project a predefined representation and the products being analyzed may be examined in comparison to the representation to determine if the products meet quality control criteria. Technology for determining a position using a rotating antenna array100has been described. In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the techniques introduced above. It will be apparent, however, to one skilled in the art that the techniques can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description and for ease of understanding. For example, the techniques are described in one embodiment above primarily with reference to software and particular hardware. However, the present invention applies to any type of computing system that can receive data and commands, and present information as part of any peripheral devices providing services. Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Some portions of the detailed descriptions described above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are, in some circumstances, used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The techniques also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, flash memories including USB keys with non-volatile memory, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus or software communication mechanism. Some embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. One embodiment is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc. Furthermore, some embodiments can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. A data processing system suitable for storing and/or executing program code can include at least one processor coupled directly or indirectly to memory elements through a system bus or software communication mechanism. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories, which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters. Finally, the algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description above. In addition, the techniques are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the various embodiments as described herein. The foregoing description of the embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the specification to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the embodiments be limited not by this detailed description, but rather by the claims of this application. As will be understood by those familiar with the art, the examples may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, routines, features, attributes, methodologies and other aspects are not mandatory or significant, and the mechanisms that implement the description or its features may have different names, divisions and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, routines, features, attributes, methodologies and other aspects of the specification can be implemented as software, hardware, firmware or any combination of the three. Also, wherever a component, an example of which is a module, of the specification is implemented as software, the component can be implemented as a standalone program, as part of a larger program, as a plurality of separate programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in any and every other way known now or in the future to those of ordinary skill in the art of computer programming. Additionally, the specification is in no way limited to embodiment in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure is intended to be illustrative, but not limiting, of the scope of the specification, which is set forth in the following claims.
11860291
DESCRIPTION OF EMBODIMENTS The following Description of Embodiments is merely provided by way of example and not of limitation. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding background or brief summary, or in the following detailed description. Reference will now be made in detail to various embodiments of the subject matter, examples of which are illustrated in the accompanying drawings. While various embodiments are discussed herein, it will be understood that they are not intended to be limiting. On the contrary, the presented embodiments are intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the various embodiments as defined by the appended claims. Furthermore, in this Description of Embodiments, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present subject matter. However, embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the described embodiments. Notation and Nomenclature Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data within an electrical circuit. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be one or more self-consistent procedures or instructions leading to a desired result. The procedures are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in an electronic device. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the description of embodiments, discussions utilizing terms such as “receiving,” “determining,” “adjusting,” “sorting,” “applying,” “displaying,” “detecting,” “initiating,” “communicating,” “calibrating,” “generating,” or the like, refer to the actions and processes of an electronic device such as: a processor, a memory, a computing system, a mobile electronic device, or the like, or a combination thereof. The electronic device manipulates and transforms data represented as physical (electronic and/or magnetic) quantities within the electronic device's registers and memories into other data similarly represented as physical quantities within the electronic device's memories or registers or other such information storage, transmission, processing, or display components. Embodiments described herein may be discussed in the general context of processor-executable instructions residing on some form of non-transitory processor-readable medium, such as program modules, executed by one or more computers or other devices.
Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments. In the figures, a single block may be described as performing a function or functions; however, in actual practice, the function or functions performed by that block may be performed in a single component or across multiple components, and/or may be performed using hardware, using software, or using a combination of hardware and software. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, logic, circuits, and steps have been described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. Also, the example fingerprint sensing system and/or mobile electronic device described herein may include components other than those shown, including well-known components. Various techniques described herein may be implemented in hardware, software, firmware, or any combination thereof, unless specifically described as being implemented in a specific manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a non-transitory processor-readable storage medium comprising instructions that, when executed, perform one or more of the methods described herein. The non-transitory processor-readable data storage medium may form part of a computer program product, which may include packaging materials. The non-transitory processor-readable storage medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, other known storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a processor-readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer or other processor. Various embodiments described herein may be executed by one or more processors, host processor(s) or core(s) thereof, digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), application specific instruction set processors (ASIPs), field programmable gate arrays (FPGAs), a programmable logic controller (PLC), a complex programmable logic device (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein, or other equivalent integrated or discrete logic circuitry. The term “processor,” as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
As employed in the subject specification, the term “processor” can refer to substantially any computing processing unit or device comprising, but not limited to comprising, single-core processors; single processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Moreover, processors can exploit nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor may also be implemented as a combination of computing processing units. In addition, in some aspects, the functionality described herein may be provided within dedicated software modules or hardware modules configured as described herein. Also, the techniques could be fully implemented in one or more circuits or logic elements. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of an SPU/MPU and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with an SPU core, MPU core, or any other such configuration. Overview of Discussion Discussion begins with a description of example components of a system for determining a location of a device. Example environments in which a location of a mobile electronic device is determined are then described. Example location determination of a mobile electronic device using beacon devices is then described. In various embodiments, methods for determining a location of an electronic device are provided. A plurality of beacon signals are received from a plurality of beacon devices at the electronic device, wherein each beacon signal of the plurality of beacon signals includes an identity of a beacon device transmitting a respective beacon signal, and each beacon device of the plurality of beacon devices has a known location. A received signal strength for each beacon signal of the plurality of beacon signals is measured. A distance of the electronic device from each beacon device for which the plurality of beacon signals is received is determined, wherein the distance of the electronic device from a beacon device is based at least in part on the received signal strength of the beacon signal transmitted by the beacon device. A location of the electronic device is determined based at least in part on the distance of the electronic device from each beacon device for which the plurality of beacon signals is received. In general, a beacon is a simple device or gadget that is placed in an environment, may be stationary, and is configured to broadcast its identification (ID). The identification may or may not include a location, and the beacon may or may not generate data that is broadcast. In one embodiment, the beacon only wirelessly broadcasts an identification signal. In one embodiment, the beacon may be mobile and may be tracked by the present technology in the environment.
In accordance with various embodiments, a mobile electronic device such as a computer system, laptop, tablet, smart phone, server, or other electronic device includes a receiver that can listen for the broadcasts from the beacon. For example, the mobile electronic device may include a Bluetooth receiver that is able to receive a signal from a beacon that is broadcasting an ID over Bluetooth. In one embodiment, the mobile electronic device is also able to determine the power level of the beacon based on the broadcast from the beacon. The beacon may or may not specifically broadcast its power level with the ID. In one embodiment, the mobile electronic device is able to determine the power level of the beacon based on pre-programmed knowledge of the beacon, including the model type of the beacon. In one embodiment, the mobile electronic device is able to determine the power level of the beacon based on the signal strength of the broadcast from the beacon. In other embodiments, the mobile electronic device is able to communicate the received signals to a remote computer system for processing. In one embodiment, the mobile electronic device is able to determine the distance between the mobile electronic device and the beacon based on the received signal from the beacon. In one embodiment, the mobile electronic device is able to determine the distance between the mobile electronic device and a plurality of beacons. The distances between the mobile electronic device and the plurality of beacons can be used to determine a location of the mobile electronic device. In various embodiments, the location of the mobile electronic device can be compared to a map of an environment for determining whether to initiate actions depending on a location of the mobile electronic device. In another embodiment, a remote computer system is able to determine the distance between the mobile electronic device and the beacon based on the received signal from the beacon. In one embodiment, the remote computer system is able to determine the distance between the mobile electronic device and a plurality of beacons. The distances between the mobile electronic device and the plurality of beacons can be used to determine a location of the mobile electronic device. In various embodiments, the location of the mobile electronic device can be compared to a map of an environment for determining whether to initiate actions depending on a location of the mobile electronic device. Example Components of a System for Determining a Location of a Device Turning now to the figures, FIG. 1 is a block diagram of an example beacon device 100 upon which embodiments described herein may be implemented. Beacon device 100 is an electronic device that is capable of broadcasting its identifier such that nearby electronic devices (e.g., mobile electronic devices) are able to receive the identifier. As will be appreciated, beacon device 100 can be implemented using any type of electronic device capable of communicating its identifier. In accordance with various embodiments, beacon device 100 includes the minimum componentry required to provide identifier transmission functionality, and it should be appreciated that beacon device 100 may include other components and may provide other functionality. As illustrated, beacon device 100 includes processor 110, memory 120, and transmitter 130. Processor 110 may be any type of processor as described above.
Memory 120 may include computer usable volatile memory and/or computer usable non-volatile memory for storing information and instructions for processor 110. Transmitter 130 is configured to emit a signal including an identifier of beacon device 100. In one embodiment, the identifier is a unique identifier, e.g., a universally unique identifier (UUID). In accordance with various embodiments, transmitter 130 can be implemented using any type of wireless technology for communicating data, such as Bluetooth, Bluetooth Low Energy (BLE), Wi-Fi, Z-Wave, ZigBee, or any other type of wireless technology, including proprietary wireless technology. In one embodiment, beacon device 100 includes optional receiver 140. Beacon device 100 may communicate with a network and receive data via receiver 140. For example, a remote management system can communicate configuration data to beacon device 100 via receiver 140. In another example, a remote management system may poll beacon device 100 for information (e.g., power level or transmission signal strength), which can then be communicated to the remote management system via transmitter 130. In accordance with various embodiments, beacon device 100 may be a Bluetooth device in an environment (e.g., BLE). A beacon device 100 implementing BLE may regularly go to sleep and wake up, using power only when necessary, and may have a battery life measured in years. It should be appreciated that the described embodiments may operate in indoor or outdoor environments or a combination of the two. The described embodiments are different from geofencing in that geofencing requires location information based on an external system such as the Global Positioning System (GPS). For example, beacons in geofencing may broadcast their GPS location. Therefore, geofencing may be difficult or impossible to use indoors, where the signal from satellites may be blocked by the structure. The described embodiments do not rely on, nor do they require, GPS location information. The described embodiments determine the location of a mobile electronic device based on signals received from beacon devices 100. Embodiments described herein provide for a mobile electronic device to receive the signal including the identifier transmitted by beacon device 100. In accordance with various embodiments, a mobile electronic device is configured to receive signals including the identifier from multiple beacon devices 100. The received signals are processed to determine a location of the mobile electronic device. As will be described, it should be appreciated that the processing of the received signals may be performed at the mobile electronic device, at a remote computer system (e.g., a server or a remote management system), or at any combination of the mobile electronic device and a remote computer system. FIG. 2 illustrates an example electronic device upon which embodiments described herein may be implemented. Electronic device 200 is capable of receiving signals transmitted by beacon device 100, and communicating the received signals to a remote computer system (e.g., a server or a remote management system).
In accordance with the illustrated embodiment, electronic device 200 includes the minimum componentry required to receive a wireless signal from a beacon device 100 and to communicate the contents of the wireless signal (e.g., the identifier) and information related to the receipt of the wireless signal (e.g., the received signal strength) to a remote computer system, and it should be appreciated that electronic device 200 may include other components and may provide other functionality. In one embodiment, electronic device 200 is a simple device or gadget that is able to receive and transmit wireless signals. For example, electronic device 200 may be included within a keychain, a key fob, or an identification card. In some embodiments, electronic device 200 is operable to perform some processing on the received signals prior to communicating information related to the signals to a remote computer system. It should be appreciated that electronic device 200 may perform any amount of processing, or no processing, of the received signals prior to communicating information related to the signals to a remote computer system. For example, electronic device 200 may operate in conjunction with electronic device 300 of FIG. 3 to determine a location of electronic device 200. As illustrated, electronic device 200 includes processor 210, memory 220, transmitter 230, and receiver 240. Processor 210 may be any type of processor as described above. Memory 220 may include computer usable volatile memory and/or computer usable non-volatile memory for storing information and instructions for processor 210. Receiver 240 is configured to receive a signal from a beacon (e.g., beacon device 100) including an identifier of the transmitting beacon. In one embodiment, the identifier is a UUID. In accordance with various embodiments, receiver 240 can be implemented using any type of wireless technology for communicating data, such as Bluetooth, BLE, Wi-Fi, Z-Wave, ZigBee, or any other type of wireless technology, including proprietary wireless technology. Receiver 240 is also operable to measure a signal strength of the received signal, where the signal strength is an indication of the power level of the signal being received at receiver 240. The signal strength is impacted by the distance between receiver 240 and an emitting beacon (e.g., the signal strength weakens as the distance grows). The signal strength may also be impacted by the power source of the emitting beacon, the particular hardware of the beacon, and the manufacturer of the beacon. In one embodiment, the measure of signal strength is indicated as a received signal strength indicator (RSSI). In another embodiment, the measure of signal strength is indicated as the received channel power indicator (RCPI). In one embodiment, electronic device 200 includes transmitter 230. Electronic device 200 may communicate with a network by transmitting data via transmitter 230. For example, electronic device 200 may communicate information related to the received signals (e.g., identifier and signal strength) to a remote computer system (e.g., a server or a remote management system) for further processing. FIG. 3 illustrates another example electronic device 300 upon which embodiments of the present invention can be implemented. FIG. 3 illustrates one example of a type of electronic device 300 (e.g., a computer system) that can be used in accordance with or to implement various embodiments which are discussed herein.
As described above, it should be appreciated that the processing of the received signals may be performed at the mobile electronic device, at a remote computer system (e.g., a server or a remote management system), or at any combination of the mobile electronic device and a remote computer system. In one embodiment, the mobile electronic device that receives the signals from the beacon devices is implemented as electronic device 300. In one embodiment, a remote computer system in communication with the mobile electronic device is implemented as electronic device 300. It is appreciated that electronic device 300 of FIG. 3 is only an example and that embodiments as described herein can operate on or within a number of different computer systems including, but not limited to, general purpose networked computer systems, embedded computer systems, mobile electronic devices, smart phones, server devices, client devices, various intermediate devices/nodes, stand-alone computer systems, media centers, handheld computer systems, multi-media devices, and the like. In some embodiments, electronic device 300 of FIG. 3 is well adapted to having peripheral tangible computer-readable storage media 302 such as, for example, an electronic flash memory data storage device, a floppy disc, a compact disc, a digital versatile disc, other disc-based storage, a universal serial bus “thumb” drive, a removable memory card, and the like coupled thereto. The tangible computer-readable storage media is non-transitory in nature. Electronic device 300 of FIG. 3 includes an address/data bus 304 for communicating information, and a processor 306A coupled with bus 304 for processing information and instructions. As depicted in FIG. 3, electronic device 300 is also well suited to a multi-processor environment in which a plurality of processors 306A, 306B, and 306C are present. Conversely, electronic device 300 is also well suited to having a single processor such as, for example, processor 306A. Processors 306A, 306B, and 306C may be any of various types of microprocessors. Electronic device 300 also includes data storage features such as a computer usable volatile memory 308, e.g., random access memory (RAM), coupled with bus 304 for storing information and instructions for processors 306A, 306B, and 306C. Electronic device 300 also includes computer usable non-volatile memory 310, e.g., read only memory (ROM), coupled with bus 304 for storing static information and instructions for processors 306A, 306B, and 306C. Also present in electronic device 300 is a data storage unit 312 (e.g., a magnetic or optical disc and disc drive) coupled with bus 304 for storing information and instructions. Electronic device 300 also includes an alphanumeric input device 314 including alphanumeric and function keys coupled with bus 304 for communicating information and command selections to processor 306A or processors 306A, 306B, and 306C. Electronic device 300 also includes a cursor control device 316 coupled with bus 304 for communicating user input information and command selections to processor 306A or processors 306A, 306B, and 306C. In one embodiment, electronic device 300 also includes a display device 318 coupled with bus 304 for displaying information. Referring still to FIG. 3, display device 318 of FIG. 3 may be a liquid crystal device (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT), a plasma display device, a touch screen device, or other display device suitable for creating graphic images and alphanumeric characters recognizable to a user.
Cursor control device 316 allows the computer user to dynamically signal the movement of a visible symbol (cursor) on a display screen of display device 318 and indicate user selections of selectable items displayed on display device 318. Many implementations of cursor control device 316 are known in the art, including a trackball, mouse, touch pad, touch screen, joystick, or special keys on alphanumeric input device 314 capable of signaling movement of a given direction or manner of displacement. Alternatively, it will be appreciated that a cursor can be directed and/or activated via input from alphanumeric input device 314 using special keys and key sequence commands. Electronic device 300 is also well suited to having a cursor directed by other means such as, for example, voice commands. In various embodiments, alphanumeric input device 314, cursor control device 316, and display device 318, or any combination thereof (e.g., user interface selection devices), may collectively operate to provide a graphical user interface (GUI) 330 under the direction of a processor (e.g., processor 306A or processors 306A, 306B, and 306C). GUI 330 allows a user to interact with electronic device 300 through graphical representations presented on display device 318 by interacting with alphanumeric input device 314 and/or cursor control device 316. Electronic device 300 also includes an I/O device 320 for coupling electronic device 300 with external entities. For example, in one embodiment, I/O device 320 is a modem for enabling wired or wireless communications between electronic device 300 and an external network such as, but not limited to, the Internet. In some embodiments, I/O device 320 includes a receiver configured to receive a signal from a beacon (e.g., beacon device 100) including an identifier of the transmitting beacon. In one embodiment, the identifier is a UUID. In accordance with various embodiments, I/O device 320 can be implemented using any type of wireless technology for communicating data, such as Bluetooth, BLE, Wi-Fi, Z-Wave, ZigBee, or any other type of wireless technology, including proprietary wireless technology. I/O device 320 is also operable to measure a signal strength of the received signal, where the signal strength is an indication of the power level of the signal being received at I/O device 320. The signal strength is impacted by the distance between I/O device 320 and an emitting beacon (e.g., the signal strength weakens as the distance grows). The signal strength may also be impacted by the power source of the emitting beacon, the particular hardware of the beacon, and the manufacturer of the beacon. In one embodiment, the measure of signal strength is indicated as a received signal strength indicator (RSSI). In another embodiment, the measure of signal strength is indicated as the received channel power indicator (RCPI). In one embodiment, I/O device 320 includes a transmitter. Electronic device 300 may communicate with a network by transmitting data via I/O device 320. Electronic device 300 may also communicate with a mobile electronic device (e.g., electronic device 200) or another electronic device 300. For example, electronic device 300 may communicate information related to signals received from a beacon device (e.g., identifier and signal strength) to a remote computer system (e.g., another electronic device 300) for further processing. Referring still to FIG. 3, various other components are depicted for electronic device 300.
Specifically, when present, an operating system 322, applications 324, modules 326, and data 328 are shown as typically residing in one or some combination of computer usable volatile memory 308 (e.g., RAM), computer usable non-volatile memory 310 (e.g., ROM), and data storage unit 312. In some embodiments, all or portions of various embodiments described herein are stored, for example, as an application 324 and/or module 326 in memory locations within RAM 308, computer-readable storage media within data storage unit 312, peripheral computer-readable storage media 302, and/or other tangible computer-readable storage media. Example Environments in which a Location of a Mobile Electronic Device is Determined FIG. 4A illustrates an example environment 400 in which a location of a mobile electronic device is determined, according to some embodiments. Environment 400 includes a plurality of beacon devices 410a-c and a mobile electronic device 420. In accordance with some embodiments, beacon devices 410a-c are implemented as beacon device 100 of FIG. 1 and mobile electronic device 420 is implemented as electronic device 300 of FIG. 3. As illustrated, three beacon devices 410a-c are placed within an environment at known locations. In one embodiment, the beacon devices 410a-c are stationary and do not move. In another embodiment, at least one of beacon devices 410a-c is mobile, but has a known location during transmission of its identifier. Mobile electronic device 420 receives beacon signals 412a-c from beacons 410a-c when mobile electronic device 420 moves into range for receiving the signals from a respective beacon device 410a-c. Each beacon signal 412a-c includes an identifier of the beacon device 410a-c that is transmitting the respective beacon signal 412a-c. Mobile electronic device 420 determines a distance of each beacon device 410a-c from mobile electronic device 420. In one embodiment, the distance is determined based in part on the received signal strength of beacon signals 412a-c. In one embodiment, beacon devices 410a-c include associated calibration factors. The distance of mobile electronic device 420 to each beacon device 410a-c can be adjusted according to the calibration factor. Mobile electronic device 420 determines a location of the mobile electronic device 420 based at least in part on the distance of the mobile electronic device 420 from each beacon device 410a-c for which the plurality of beacon signals 412a-c is received. FIG. 4B illustrates another example environment 450 in which a location of a mobile electronic device 470 is determined, according to some embodiments. Environment 450 includes a plurality of beacon devices 460a-c and a mobile electronic device 470. Mobile electronic device 470 is also able to communicate with remote computer system 480 over wireless network 494. In accordance with some embodiments, beacon devices 460a-c are implemented as beacon device 100 of FIG. 1 and remote computer system 480 is implemented as electronic device 300 of FIG. 3. It should be appreciated that mobile electronic device 470 can be implemented as electronic device 200 of FIG. 2 or electronic device 300 of FIG. 3. As illustrated, three beacon devices 460a-c are placed within an environment at known locations. In one embodiment, the beacon devices 460a-c are stationary and do not move. In another embodiment, at least one of beacon devices 460a-c is mobile, but has a known location during transmission of its identifier. Mobile electronic device 470 receives beacon signals 462a-c from beacons 460a-c when mobile electronic device 470 moves into range for receiving the signals from a respective beacon device 460a-c.
Each beacon signal 462a-c includes an identifier of the beacon device 460a-c that is transmitting the respective beacon signal 462a-c. Mobile electronic device 470 measures a signal strength for each beacon signal 462a-c. In one embodiment, mobile electronic device 470 transmits the signal strength measurements and identifiers for each beacon device 460a-c to remote computer system 480. It should be appreciated that mobile electronic device 470 may perform all processing for location determination, provided the internal componentry allows for such processing. For purposes of the illustrated embodiment, all processing for location determination based on measured signal strength and identifiers for beacons 460a-c is performed at remote computer system 480. Remote computer system 480 determines a distance of each beacon device 460a-c from mobile electronic device 470. In one embodiment, the distance is determined based in part on the received signal strength of beacon signals 462a-c. In one embodiment, beacon devices 460a-c include associated calibration factors. The distance of mobile electronic device 470 to each beacon device 460a-c can be adjusted according to the calibration factor. Remote computer system 480 determines a location of the mobile electronic device 470 based at least in part on the distance of the mobile electronic device 470 from each beacon device 460a-c for which the plurality of beacon signals 462a-c is received. FIG. 5 illustrates an example map 505 of an environment 500 in which a location of a mobile electronic device is determined, according to some embodiments. In one embodiment, map 505 is displayed at a computer system (e.g., a remote management system). In one embodiment, map 505 is an image file that is uploaded to the computer system (e.g., a bitmap file). The map is scaled such that the dimensions of the environment captured in the map are consistent with the actual dimensions. Beacon markings (e.g., beacon markings 510a-e) are indicated in map 505 and are representative of their known locations. It should be appreciated that any number of beacon devices may be placed within an environment, and that the illustrated placement of beacon devices is an example. In accordance with various embodiments, once the locations of beacons are known in an environment 500, virtual fences and/or virtual areas (collectively referred to as “virtual zones”) may be detected, generated, or manipulated. Virtual fences (e.g., virtual fences 530 and 532) refer to artificially created boundaries (e.g., a virtual wall) defined within the physical boundaries of the environment. Virtual zones (e.g., virtual areas 535 and 540) refer to artificially created zones defined within the physical boundaries of the environment. It should be appreciated that a virtual area can be conceived of as comprising multiple virtual fences. For example, environment 500 is a building including a number of rooms and hallways. Virtual fence 530 is placed at an opening to the building and virtual fence 532 is placed at an opening between a room and a hallway. Moreover, virtual areas 535 and 540 are placed at rooms within the building. It should be appreciated that different configurations of virtual areas and virtual fences may be used. For example, the virtual fences may coincide with the walls separating the four rooms such that each room is defined to be a zone. Alternatively, the virtual fences may define zones that subdivide rooms into more than one zone or place a plurality of rooms in the same zone.
The described embodiments provide a solution that delivers precise “GPS-free” indoor/outdoor user location tracking and events with radial and geometric zones for any physical location and layout. A user may interact with map 505 and visually draw the layout of the physical location and beacon placement to track location information down to inches. The user may create radial and rectangular zones and track intersections, entries, and exits with call-back events. In some embodiments, a computer system (e.g., remote computer system 480) may be used to manipulate the virtual fences in environment 500 on the fly. For example, environment 500 may be an indoor structure with a plurality of rooms where access is limited due to security reasons during an event. During the event, the computer system may be employed to manipulate or move the virtual fences to encompass more or less of the limited access area. In one embodiment, access to the limited access areas may be controlled by networked door locks or smart door locks. For example, an individual authorized to enter the limited access area may carry a mobile electronic device such as a smart phone or an electronic wristband. When the system detects that the mobile electronic device is in proximity to the smart lock, the smart lock may be automatically opened. In this example, a person's or a group's access may be changed on the fly. The computer system may allow a lay person, or a person who is not experienced in creating and manipulating virtual fences, to perform such operations via a user-friendly interface. The interface may display map 505 of environment 500 with an overlay of the virtual fences. A user may manipulate the virtual fences by clicking and dragging or using other commands in the interface. The interface comprises tools to lay out zones for events within the environment. The virtual fences or zones may be quickly generated by users without the users being required to write program code. The described embodiments may be used in many different types of environments 500. For example, environment 500 may be a retail environment and the virtual fences and virtual areas may track the movements of a shopper within the environment. A virtual fence may be encountered by a shopper in a specific location of the store and then trigger a coupon to be sent to the shopper (e.g., for display on the mobile device 515a associated with the shopper). In another example, the encounter with a virtual fence may result in directions being sent to a user (e.g., for display on the mobile device 515a associated with the user). Embodiments described herein may be employed in retail, healthcare, manufacturing, entertainment venues, amusement parks, museums, etc. For example, in a manufacturing environment, the described embodiments may be used to track workers and machines, may provide access to turn on machines, provide proximity-based authentication and security clearance, define safety zones, etc. In one embodiment, the described embodiments are combined with biometrics on devices to further identify, for security purposes, which individual associated with a mobile device is in which location. The mobile devices 515a and 515b are electronic devices that are mobile within environment 500 and may be more sophisticated than a stationary beacon.
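The zone behavior described above reduces to a containment test on the determined device location, with enter and exit call-back events fired when consecutive positions fall on opposite sides of a zone boundary. The following is a minimal sketch under assumed rectangular and radial zone shapes; the class and zone names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RectZone:
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    def contains(self, x, y):
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class RadialZone:
    cx: float
    cy: float
    radius: float
    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.radius ** 2

def detect_zone_events(prev_pos, cur_pos, zones):
    # Compare consecutive device positions against each zone and report
    # the enter/exit events that would trigger call-backs.
    events = []
    for name, zone in zones.items():
        was_in, is_in = zone.contains(*prev_pos), zone.contains(*cur_pos)
        if is_in and not was_in:
            events.append((name, "enter"))
        elif was_in and not is_in:
            events.append((name, "exit"))
    return events

zones = {"secure_room": RectZone(0, 0, 5, 4), "exhibit": RadialZone(10, 10, 2)}
print(detect_zone_events((6, 2), (3, 2), zones))  # [('secure_room', 'enter')]
```

An action such as unlocking a smart lock or generating an alert would then be dispatched per event, as described for procedures 740 and 750 below.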
Mobile devices 515a and 515b serve a different purpose than the computer system described herein that is employed to generate or manipulate the virtual fences and virtual areas; mobile devices 515a and 515b are tracked by the system within the environment 500. When a mobile electronic device encounters a virtual fence or virtual area, the system may cause an event to occur, such as unlocking a door or causing an alarm to sound. The mobile electronic device may be, but is not limited to, a smart phone executing an app specific to the present technology, a handheld device, a wearable device, a tablet, a smart watch, etc. In one embodiment, a mobile electronic device or a remote computer system employs beacon devices and triangulation to determine a location of the mobile electronic device within environment 500. In other embodiments, a probabilistic model approach is used to determine a location of the mobile electronic device within environment 500. For instance, beacon signals may be noisy, precluding the use of a typical triangulation algorithm. In one embodiment, Monte Carlo Localization (MCL) is used to estimate the real position of the device, and its trajectory in the environment, with more accuracy. For example, MCL may be used to localize using a particle filter given a map of the environment. This may occur, for example, in an environment where GPS does not function. In one embodiment, a minimum of three beacons are used in environment 500. In a larger environment, more than three beacons may be required. For example, if a beacon device's transmission range is 100 feet and an environment has dimensions larger than 100 feet, more than three beacons will be needed. It should be appreciated that any number of beacons may be placed within an environment. Prior attempts to use low-power beacons to determine location in an environment encountered problems based on the variability of different beacons. For example, different beacon manufacturers use different batteries, are prone to different types of interference or noise, are affected by temperature differently, etc. The described embodiments have overcome these problems with the development of a software stack, as will be explained below. The described embodiments are able to accurately determine the location of a beacon in an environment with an accuracy measured in inches rather than feet. In one embodiment, the present technology is able to receive and/or measure all of the noise from a plurality of beacons and other devices in a given environment and average this noise to determine an accurate signal strength for each beacon. Such a technique employs signal-to-noise ratios (SNR). Example Location Determination of a Mobile Electronic Device Using Beacon Devices FIG. 6 illustrates a flow diagram 600 of an example method for determining a location of an electronic device, according to various embodiments. Procedures of this method will be described with reference to elements and/or components of various figures described herein. It is appreciated that in some embodiments, the procedures may be performed in a different order than described, that some of the described procedures may not be performed, and/or that one or more additional procedures to those described may be performed. Flow diagram 600 includes some procedures that, in various embodiments, are carried out by one or more processors under the control of computer-readable and computer-executable instructions that are stored on non-transitory computer-readable storage media.
It is further appreciated that one or more procedures described in flow diagram 600 may be implemented in hardware, or a combination of hardware with firmware and/or software. In one embodiment, at procedure 605 of flow diagram 600, at least one beacon device is calibrated such that the received signal strengths for the plurality of beacon devices are normalized. In one embodiment, as shown at procedure 608, a calibration factor for at least one beacon device of the plurality of beacon devices is determined. For example, the beacon devices are calibrated such that, for a predetermined distance between the electronic device and each beacon device, the received signal strength for a particular beacon, adjusted by the calibration factor for that particular beacon, will provide consistent results across all beacons. In one embodiment, the calibration is performed automatically in response to a proximity event between the electronic device and a beacon device. In one embodiment, a proximity event is identified when the electronic device detects a signal from a beacon device and is located within a specific distance from the beacon device (e.g., the electronic device gets within a certain distance of a beacon device). The calibration calculates a calibration factor based on the known distances between the beacon closest to the electronic device and a plurality of other beacons. In one embodiment, the calibration factor is calculated according to the following equation 1:

Calibration Factor = n * (-10 * log(d)) - (SS - A)   (1)

Where n is the signal propagation constant, d is the known real distance between the beacon and the electronic device, SS is the measured signal strength, and A is the theoretical transmit power of the beacon. In one embodiment, during automatic calibration, beacon devices are calibrated in relation to the beacon device physically closest to the electronic device at one specific moment. The proximity event moment arises when the electronic device detects that it is within a specific distance (e.g., 1.5 meters) of the closest beacon and remains within that distance for a period of time (e.g., 2 seconds). The automatic calibration provides a calibration factor for each beacon device each time the electronic device passes by a beacon device within the specific distance. To calculate the calibration factor, equation 1 is used with a distance constant and the actual distance between each beacon device and the closest beacon device (e.g., real measurements from known locations of beacon devices). In another embodiment, manual calibration is performed. For example, manual calibration may be performed at system setup (e.g., one time), at regular time intervals (e.g., monthly), or during system maintenance (e.g., beacon device firmware is updated or beacon devices are moved/replaced). During manual calibration, one or more points in a map of the environment are selected, and the system is calibrated from those selected points, passing the actual coordinates to the system. This results in a calibration factor for each beacon device (a sketch of equation 1 in code follows below). In one embodiment, at procedure 610 of flow diagram 600, a plurality of beacon signals are received from a plurality of beacon devices at the electronic device, wherein each beacon signal of the plurality of beacon signals includes an identity of a beacon device transmitting a respective beacon signal, and wherein each beacon device of the plurality of beacon devices has a known location.
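The calibration factor of equation 1 above can be computed directly. The sketch below assumes the base-10 logarithm conventional in log-distance path-loss models and uses hypothetical values; n = 2 approximates free-space propagation, and signal strengths are in dBm.

```python
import math

def calibration_factor(distance_m, measured_ss, tx_power, n=2.0):
    # Equation 1: Calibration Factor = n * (-10 * log10(d)) - (SS - A),
    # where d is the known distance to the beacon, SS the measured
    # signal strength, A the theoretical transmit power, and n the
    # signal propagation constant.
    return n * (-10.0 * math.log10(distance_m)) - (measured_ss - tx_power)

# Proximity-event example: the device is known to be 1.5 m from the
# closest beacon and measures -62 dBm against a theoretical transmit
# power of -59 dBm.
print(round(calibration_factor(1.5, -62.0, -59.0), 2))  # -0.52
```

A calibration factor near zero, as here, indicates that the beacon already behaves close to its theoretical model at the measured distance.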
It should be appreciated that beacon signals may be received from any beacon device within range of the receiver of the electronic device (e.g., receiver 240 or I/O device 320). At procedure 620, a received signal strength for each beacon signal of the plurality of beacon signals is measured. In one embodiment, the receiver (e.g., receiver 240 or I/O device 320) is configured to measure the received signal strength. At procedure 625, in one embodiment, the received signal strength for each beacon signal of the plurality of beacon signals and the identity of the beacon device transmitting a respective beacon signal are communicated to a remote computing device. It should be appreciated that procedure 625 is optional, and may be dependent on the functionality of the electronic device. In one embodiment, where the received signal strength and identity information is communicated to a remote computer system, at least procedures 630 and 640 are performed at the remote computer system. In one embodiment, where procedure 625 is not performed, at least procedures 630 and 640 are performed at the electronic device. At procedure 630, a distance of the electronic device from each beacon device for which the plurality of beacon signals is received is determined, wherein the distance of the electronic device from a beacon device is based at least in part on the received signal strength of the beacon signal transmitted by the beacon device. In one embodiment, the relationship between received signal strength and distance is calculated according to the following equation 2:

SS = -10 * n * log(d) + A   (2)

Where SS is the measured signal strength, n is the signal propagation constant, d is the distance between the beacon and the electronic device, and A is the theoretical transmit power of the beacon. Solving equation 2 for d provides the distance determination. In one embodiment, a beacon device of the plurality of beacon devices has an associated calibration factor. In one embodiment, as shown at procedure 635, the distance of the electronic device from each beacon device for which the plurality of beacon signals is received is adjusted according to the associated calibration factor for each beacon device. In one embodiment, the relationship between received signal strength and distance is calculated according to the following equation 3:

SS = -10 * n * log(d) + A - Calibration Factor   (3)

Where n is the signal propagation constant, d is the distance between the beacon and the electronic device, SS is the measured signal strength, A is the theoretical transmit power of the beacon, and Calibration Factor is the calibration factor for the beacon. Solving equation 3 for d provides the distance determination. At procedure 638, the beacon devices are sorted according to distance. A list of beacon devices sorted by distance is arranged such that the first element is the nearest beacon device to the electronic device, where the distance is taken from the distance determined at procedure 630. The following beacon devices in the list are the nearest beacon devices to the first beacon device, based on the actual distances between the beacon devices (e.g., according to the known placement or the map). The list of beacons sorted according to distance allows for the use of the closest beacon devices to the electronic device for determining the location of the electronic device. For example, due to distance, walls, noise, etc., the position of a particular beacon device may be inaccurate.
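Solving equation 3 for d gives d = 10 ^ ((A - Calibration Factor - SS) / (10 * n)). A minimal sketch of this distance determination, together with the sorting of procedure 638, follows; the beacon IDs, readings, and transmit power are hypothetical, and the log is again assumed to be base 10.

```python
def distance_from_rssi(ss, tx_power, n=2.0, calibration_factor=0.0):
    # Equation 3 solved for d:
    #   SS = -10 * n * log10(d) + A - Calibration Factor
    #   =>  d = 10 ** ((A - Calibration Factor - SS) / (10 * n))
    return 10.0 ** ((tx_power - calibration_factor - ss) / (10.0 * n))

# Hypothetical readings: beacon ID -> measured RSSI in dBm.
readings = {"460a": -68.0, "460b": -59.0, "460c": -75.0}
tx_power = -59.0  # theoretical transmit power A, in dBm at 1 m

distances = {bid: distance_from_rssi(ss, tx_power) for bid, ss in readings.items()}
nearest_first = sorted(distances, key=distances.get)  # procedure 638
print(nearest_first)                 # ['460b', '460a', '460c']
print(round(distances["460a"], 2))   # about 2.82 (meters)
```

In this sketch the sorted list uses the estimated distances directly; as noted above, an implementation may instead order the remaining beacons by their known inter-beacon distances from the nearest one.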
In such situations, where the determined distance to a particular beacon device is inaccurate, the list of beacon devices sorted by distance allows for the use of the closest beacon devices regardless of the estimated or determined distance. At procedure 640, a location of the electronic device is determined based at least in part on the distance of the electronic device from each beacon device for which the plurality of beacon signals is received. In one embodiment, the location is determined based on a triangulation of the distances between three beacons and the electronic device. It should be appreciated that any number of beacons can be used, where fewer than three beacons will provide a proximity determination and more than three beacons will provide a location determination with higher accuracy. It should also be appreciated that embodiments described herein may be implemented in environments having more than one story (e.g., a two-story or taller building). As the known location of each beacon includes a three-dimensional position, the more beacon signals that are received at the electronic device, the greater the accuracy of the location determination. In one embodiment, the location of the electronic device is determined within a map of an environment. As shown at procedure 642, a particle filter is applied to possible locations of the electronic device within the map of the environment. At procedure 644, the location is determined based on a comparison of the distance of the electronic device from each beacon device for which the plurality of beacon signals is received to the map of the environment. Applying a particle filter provides for solving nonlinear filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem arises when estimating the internal states of dynamical systems from partial observations, where random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the conditional probability of the states of some process, given some noisy and partial observations. Particle filters are particularly useful for solving large-scale systems, unstable processes, or problems in which the nonlinearities are not sufficiently smooth. In one embodiment, Monte Carlo Localization (MCL) is used to localize using a particle filter given a map of the environment. MCL estimates the position and orientation of an electronic device as it moves and senses the environment. MCL uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, e.g., a hypothesis of where the electronic device is. MCL typically starts with a uniform random distribution of particles over the configuration space, meaning there is no information about where the electronic device is, and the electronic device is assumed to be equally likely to be at any point in space. Whenever the electronic device moves, the particles are shifted to predict its new state after the movement. Whenever the electronic device senses something, the particles are resampled based on recursive Bayesian estimation, e.g., on how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual location of the electronic device. Moreover, using the list of beacon devices sorted by distance allows the MCL to converge towards the actual location more quickly than without the sorted list. In accordance with various embodiments, smoothing of the results and of the path position may be performed.
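The triangulation of procedure 640 can be sketched as a least-squares fit: find the position whose distances to the known beacon locations best match the measured distances. The gradient-descent version below is one common realization, offered only as an illustration (the particle-filter refinement of procedures 642 and 644 is omitted for brevity); the beacon coordinates and ranges are hypothetical.

```python
import math

def trilaterate(beacons, distances, iterations=200, lr=0.1):
    # Minimize sum((|p - b_i| - d_i)^2) over positions p by gradient
    # descent, starting from the centroid of the beacon locations.
    x = sum(bx for bx, _ in beacons) / len(beacons)
    y = sum(by for _, by in beacons) / len(beacons)
    for _ in range(iterations):
        gx = gy = 0.0
        for (bx, by), d in zip(beacons, distances):
            r = math.hypot(x - bx, y - by) or 1e-9  # avoid division by zero
            gx += 2.0 * (r - d) * (x - bx) / r
            gy += 2.0 * (r - d) * (y - by) / r
        x -= lr * gx
        y -= lr * gy
    return x, y

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known beacon locations
distances = [7.07, 7.07, 7.07]                    # measured ranges in meters
x, y = trilaterate(beacons, distances)
print(round(x, 1), round(y, 1))  # approximately 5.0 5.0
```

With more than three beacons the same objective simply gains terms, which is consistent with the observation above that additional beacon signals improve accuracy.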
FIG. 7 illustrates an example flow diagram 700 for managing an environment, according to various embodiments. Procedures of this method will be described with reference to elements and/or components of various figures described herein. It is appreciated that in some embodiments, the procedures may be performed in a different order than described, that some of the described procedures may not be performed, and/or that one or more additional procedures to those described may be performed. Flow diagram 700 includes some procedures that, in various embodiments, are carried out by one or more processors under the control of computer-readable and computer-executable instructions that are stored on non-transitory computer-readable storage media. It is further appreciated that one or more procedures described in flow diagram 700 may be implemented in hardware, or a combination of hardware with firmware and/or software. In one embodiment, at procedure 710 of flow diagram 700, the map (e.g., map 505) of the environment is displayed. In one embodiment, as shown at procedure 720, virtual zone placement (e.g., virtual areas or virtual fences) is received at the map. At procedure 730, the location of the electronic device (e.g., mobile device 515a or 515b) within the map of the environment is displayed. For example, the location determined in flow diagram 600 is displayed as an icon representing the electronic device within the map. In one embodiment, at procedure 740, where the map includes at least one virtual zone, an event is detected responsive to determining that the electronic device encounters the virtual zone. For example, with reference to FIG. 5, an event is detected when mobile device 515a or 515b encounters a virtual zone. For instance, an event is detected if a mobile device crosses a virtual fence 530, or an event is detected if a mobile device enters virtual area 540. In one embodiment, at procedure 750, an action is initiated responsive to detecting the event. In one embodiment, as shown at procedure 752, information is communicated to the electronic device. For example, where the environment is a museum, an action may cause information related to a particular exhibit to be displayed on the electronic device, or an audible recording may be played to provide information about the exhibit. In one embodiment, as shown at procedure 754, an alert is generated. For example, if the environment includes rooms that are secure locations, an alert to security personnel may be generated in response to an electronic device entering a secure location. It should be appreciated that many different types of actions may be initiated, depending on the environment and the location of virtual zones.
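Procedure 740 ("the electronic device encounters the virtual zone") admits a simple geometric reading. The sketch below checks entry into a circular virtual area and crossing of a straight fence segment; the circular and straight-line shapes, the function names and the coordinates are illustrative assumptions, since the embodiments leave the zone geometry open.

def entered_area(location, area_center, area_radius):
    """Virtual-area case of procedure 740: has the device entered a
    circular virtual area (e.g., virtual area 540)? The circular shape
    is an assumption; zones could equally be polygons."""
    dx = location[0] - area_center[0]
    dy = location[1] - area_center[1]
    return dx * dx + dy * dy <= area_radius * area_radius

def crossed_fence(prev_loc, cur_loc, fence_a, fence_b):
    """Virtual-fence case of procedure 740: did the movement segment
    prev_loc -> cur_loc cross the fence segment fence_a -> fence_b?
    Uses the standard orientation (cross-product sign) test."""
    def side(p, q, r):  # orientation of point r relative to segment p -> q
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (side(prev_loc, cur_loc, fence_a) * side(prev_loc, cur_loc, fence_b) < 0
            and side(fence_a, fence_b, prev_loc) * side(fence_a, fence_b, cur_loc) < 0)

if crossed_fence((1, 1), (3, 3), (0, 4), (4, 0)):
    print("event detected: fence crossed")  # procedure 750 would fire here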
Conclusion

The examples set forth herein were presented in order to best explain, to describe particular applications, and to thereby enable those skilled in the art to make and use embodiments of the described examples. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. Many aspects of the different example embodiments that are described above can be combined into new embodiments. The description as set forth is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. Reference throughout this document to "one embodiment," "certain embodiments," "an embodiment," "various embodiments," "some embodiments," or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any embodiment may be combined in any suitable manner with one or more other features, structures, or characteristics of one or more other embodiments without limitation.
56,193
11860292
DETAILED DESCRIPTION OF THE EMBODIMENTS

In FIG. 1, a schematic view of a first embodiment of a detector 110 for determining a position of at least one object 112 is depicted. The detector 110 comprises at least two optical sensors 113, for example a first optical sensor 118 and a second optical sensor 120, each having at least one light-sensitive area 121. In this case, the object 112 comprises a beacon device 114, from which a light beam 116 propagates towards the first optical sensor 118 and the second optical sensor 120. The first optical sensor 118 may comprise a first light-sensitive area 122, and the second optical sensor 120 may comprise a second light-sensitive area 124. The light beam 116, as an example, may propagate along an optical axis 126 of the detector 110. Other embodiments, however, are feasible. The optical detector 110, further, comprises at least one transfer device 128, such as at least one lens or a lens system, specifically for beam shaping. The transfer device 128 has at least one focal length in response to the incident light beam 116 propagating from the object 112 to the detector 110. The transfer device 128 has an optical axis 129, wherein the transfer device 128 and the optical detector preferably may have a common optical axis. The transfer device 128 constitutes a coordinate system. A direction parallel or anti-parallel to the optical axis 126, 129 may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis 126, 129 may be defined as transversal directions, wherein a longitudinal coordinate l is a coordinate along the optical axis 126, 129 and wherein d is a spatial offset from the optical axis 126, 129. Consequently, the light beam 116 is focused, such as in one or more focal points 130, and a beam width of the light beam 116 may depend on a longitudinal coordinate z of the object 112, such as on a distance between the detector 110 and the beacon device 114 and/or the object 112. The optical sensors 118, 120 may be positioned off focus. For details of this beam width dependency on the longitudinal coordinate, reference may be made to one or more of WO 2012/110924 A1 and/or WO 2014/097181 A1. In this first preferred embodiment the optical sensors 118, 120 may be arranged such that the light-sensitive areas 122, 124 differ in their longitudinal coordinate and/or their surface areas and/or their surface shapes. As can be seen in FIG. 1, the first optical sensor 118 is a small optical sensor, whereas the second optical sensor 120 is a large optical sensor. Thus, the width of the light beam 116 fully covers the first light-sensitive area 122, whereas, on the second light-sensitive area 124, a light spot is generated which is smaller than the second light-sensitive area 124, such that the light spot is fully located within the second light-sensitive area 124. As an example, the first light-sensitive area 122 may have a surface area of 1 mm2 to 100 mm2, whereas the second light-sensitive area 124 may have a surface area of 50 mm2 to 600 mm2. Other embodiments, however, are feasible. The first optical sensor 118, in response to the illumination by the light beam 116, may generate a first sensor signal s1, whereas the second optical sensor 120 may generate a second sensor signal s2. Preferably, the optical sensors 118, 120 are linear optical sensors, i.e.
the sensor signals s1 and s2 each are solely dependent on the total power of the light beam 116 or of the portion of the light beam 116 illuminating their respective light-sensitive areas 122, 124, whereas these sensor signals s1 and s2 are independent from the actual size of the light spot of illumination. In other words, preferably, the optical sensors 118, 120 do not exhibit the above-described FiP effect. The sensor signals s1 and s2 are provided to an evaluation device 132 of the detector 110. The evaluation device 132, as symbolically shown in FIG. 1, is embodied to derive a quotient signal Q, as explained above. The quotient signal Q, derived by dividing the sensor signals s1 and s2 or multiples or linear combinations thereof, may be used for deriving at least one item of information on a longitudinal coordinate z of the object 112 and/or the beacon device 114, from which the light beam 116 propagates towards the detector 110. For further details of this evaluation, reference is made to FIGS. 3 and 4 below. The detector 110, in combination with the at least one beacon device 114, may be referred to as a detector system 134, as will be explained in further detail below with reference to FIG. 5. In FIG. 2, a modification of the embodiment of FIG. 1 is shown, which forms an alternative detector 110. The alternative embodiment of the detector 110 widely corresponds to the embodiment shown in FIG. 1. Instead of using an active light source, i.e. a beacon device 114 with light-emitting properties for generating the light beam 116, however, the detector 110 comprises at least one illumination source 136. The illumination source 136, as an example, may comprise a laser, whereas, in FIG. 1, as an example, the beacon device 114 may comprise a light-emitting diode (LED). The illumination source 136 may be configured for generating at least one illumination light beam 138 for illuminating the object 112. The illumination light beam 138 is fully or partially reflected by the object 112 and travels back towards the detector 110, thereby forming the light beam 116. As shown in FIG. 2, as an example, the illumination light beam 138 may be parallel to the optical axis 126 of the detector 110. Other embodiments, i.e. off-axis illumination and/or illumination at an angle, are feasible, too. In order to provide an on-axis illumination, as shown in FIG. 2, as an example, one or more reflective elements 140 may be used, such as one or more prisms and/or mirrors, such as dichroic mirrors, such as movable mirrors or movable prisms. Apart from these modifications, the setup of the embodiment in FIG. 2 corresponds to the setup in FIG. 1. Thus, again, an evaluation device 132 may be used, having, e.g., at least one divider 142 for forming the quotient signal Q, and, as an example, at least one position evaluation device 144, for deriving the at least one longitudinal coordinate z from the quotient signal Q. It shall be noted that the evaluation device 132 may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 142, 144 may be embodied by appropriate software components. It shall further be noted that the embodiments shown in FIGS. 1 and 2 simply provide embodiments for determining the longitudinal coordinate z of the object 112. It is also feasible, however, to modify the setups of FIGS. 1 and 2 to provide additional information on a transversal coordinate of the object 112 and/or of parts thereof.
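The division performed by the divider 142 is the whole of the combination step in this embodiment, so a sketch is short. The check below also illustrates why the quotient is insensitive to the total beam power: with linear sensors, scaling the power scales s1 and s2 alike. The division direction is an assumption; the text equally allows multiples or linear combinations of the signals.

def quotient_signal(s1, s2):
    """Form the quotient signal Q from two linear sensor signals, as done
    by the divider 142. Division direction and any scaling factors are
    illustrative assumptions."""
    if s2 == 0:
        raise ValueError("second sensor signal must be non-zero")
    return s1 / s2

# Because both sensors are linear, scaling the total beam power by any
# factor c scales s1 and s2 alike, leaving Q unchanged:
s1, s2, c = 0.40, 2.10, 7.5
assert abs(quotient_signal(s1, s2) - quotient_signal(c * s1, c * s2)) < 1e-12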
As an example, e.g. in between the transfer device 128 and the optical sensors 118, 120, one or more parts of the light beam 116 may be branched off, and may be guided to a position-sensitive device such as one or more CCD and/or CMOS pixelated sensors and/or quadrant detectors and/or other position-sensitive devices, which, from a transversal position of a light spot generated thereon, may derive a transversal coordinate of the object 112 and/or of parts thereof. The transversal coordinate may be used to verify and/or enhance the quality of the distance information. For further details, as an example, reference may be made to one or more of the above-mentioned prior art documents which provide for potential solutions of transversal sensors. In FIGS. 3 and 4, typical quotient signals Q are depicted, as a function of the longitudinal coordinate z of an object 112 in a test setup. Therein, a simple quotient s1/s2 is shown, for an exemplary setup of the detector 110. FIGS. 3 and 4, each, show a bundle of experiments which are not resolved in these figures. Thus, in FIG. 3, various curves are given for the setup shown in FIG. 1, with an active beacon device 114 having an LED. The current of the LED target of the beacon device 114, in this experiment, is changed from 1000 mA to 25 mA. Basically, no difference in the quotient signal, as a function of the longitudinal coordinate z (given in mm), can be detected over the spatial measurement range of 250 mm to 2,250 mm. The experiment clearly shows that the setup of the detector 110 according to the present invention is independent from the total power of the light beam 116. Thus, no additional information on the total power of the light beam, and, thus, no additional information on the luminance, is required in order to derive the longitudinal coordinate. Thus, as shown in FIG. 3, as an example, a unique relationship between a quotient signal Q as measured in an experiment and a longitudinal coordinate z exists. Thus, the curves as shown in FIG. 3, as an example, may be used as calibration curves for indicating a unique and predetermined or determinable relationship between the quotient signal Q and the longitudinal coordinate. The curves as shown in FIG. 3, as an example, may be stored in a data storage and/or in a lookup table. The calibration curves Q may simply be determined by calibration experiments. It is also feasible, however, to derive these curves by one or more of modelling, analytically, semi-empirically and empirically. The experiment shown in FIG. 3 clearly demonstrates that the setup of the detector 110 according to the present invention provides a large range of measurement, both in terms of space (e.g. a measurement range from 270 mm to 2,250 mm) and in terms of brightness or total power of the light beam 116. In FIG. 4, an additional experiment is shown which demonstrates that the setup is widely independent from the target size, i.e. the lateral diameter of the beacon device 114. For this experiment, again, an LED beacon device 114 was used, similar to the setup shown in FIG. 1, wherein the size of the target, i.e. the visible part of the LED, was changed by using a diffuser and an adjustable aperture. Thereby, the aperture or the size of the target was varied from 1 mm to 25 mm in diameter. Without resolving the curves shown in FIG. 4 in detail, it is clearly visible that the quotient signal Q, again, is widely independent from the target size, in between a target size of 1 mm to 25 mm. Thus, again, a unique relationship between the quotient signal Q and the longitudinal coordinate z can be derived, for various target sizes, which may be used for evaluation.
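Reading a longitudinal coordinate back out of a stored calibration curve can be as simple as an interpolated table lookup. The curve values below are invented for illustration (the real curves are those of FIG. 3), and the sketch assumes the curve is monotonic over the measurement range, which the unique Q-z relationship suggests.

import numpy as np

# Hypothetical calibration curve, e.g. recorded as in FIG. 3 and stored in
# a lookup table: quotient signal Q versus longitudinal coordinate z in mm.
z_calib = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0, 1800.0, 2100.0])
q_calib = np.array([0.35, 0.52, 0.66, 0.76, 0.84, 0.90, 0.94])  # illustrative

def z_from_quotient(q_measured):
    """Invert the calibration curve by linear interpolation. Assumes the
    curve is monotonic over the measurement range."""
    return float(np.interp(q_measured, q_calib, z_calib))

print(z_from_quotient(0.70))  # about 1020 mm on this illustrative curve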
The results shown in FIGS. 3 and 4 were derived experimentally, by varying the named parameters and by measuring appropriate signals. The results, however, may also be derived analytically, semi-analytically or by modelling. Comparable results were obtained. FIG. 5 shows, in a highly schematic illustration, an exemplary embodiment of a detector 110, e.g. according to the embodiments shown in FIG. 1 or 2. The detector 110 specifically may be embodied as a camera 146 and/or may be part of a camera 146. The camera 146 may be made for imaging, specifically for 3D imaging, and may be made for acquiring standstill images and/or image sequences such as digital video clips. Other embodiments are feasible. FIG. 5 further shows an embodiment of a detector system 134, which, besides the at least one detector 110, comprises one or more beacon devices 114, which, in this example, may be attached to and/or integrated into an object 112, the position of which shall be detected by using the detector 110. FIG. 5 further shows an exemplary embodiment of a human-machine interface 148, which comprises the at least one detector system 134 and, further, an entertainment device 150, which comprises the human-machine interface 148. The figure further shows an embodiment of a tracking system 152 for tracking a position of the object 112, which comprises the detector system 134. The components of the devices and systems shall be explained in further detail below. FIG. 5 further shows an exemplary embodiment of a scanning system 154 for scanning a scenery comprising the object 112, such as for scanning the object 112 and/or for determining at least one position of the at least one object 112. The scanning system 154 comprises the at least one detector 110, and, further, optionally, the at least one illumination source 136 as well as, optionally, at least one further illumination source 136. The illumination source 136, generally, is configured to emit at least one illumination light beam 138, such as for illumination of at least one dot, e.g. a dot located on one or more of the positions of the beacon devices 114 and/or on a surface of the object 112. The scanning system 154 may be designed to generate a profile of the scenery including the object 112 and/or a profile of the object 112, and/or may be designed to generate at least one item of information about the distance between the at least one dot and the scanning system 154, specifically the detector 110, by using the at least one detector 110. As outlined above, an exemplary embodiment of the detector 110 which may be used in the setup of FIG. 5 is shown in FIGS. 1 and 2. Thus, the detector 110, besides the optical sensors 118, 120, comprises at least one evaluation device 132, having e.g. the at least one divider 142 and/or the at least one position evaluation device 144, as symbolically depicted in FIG. 5. The components of the evaluation device 132 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector 110. Besides the possibility of fully or partially combining two or more components, one or more of the optical sensors 118, 120 and one or more of the components of the evaluation device 132 may be interconnected by one or more connectors 156 and/or by one or more interfaces, as symbolically depicted in FIG. 5.
Further, the one or more connectors 156 may comprise one or more drivers and/or one or more devices for modifying or preprocessing sensor signals. Further, instead of using the at least one optional connector 156, the evaluation device 132 may fully or partially be integrated into one or both of the optical sensors 118, 120 and/or into a housing 158 of the detector 110. Additionally or alternatively, the evaluation device 132 may fully or partially be designed as a separate device. In this exemplary embodiment, the object 112, the position of which may be detected, may be designed as an article of sports equipment and/or may form a control element or a control device 160, the position of which may be manipulated by a user 162. As an example, the object 112 may be or may comprise a bat, a racket, a club or any other article of sports equipment and/or fake sports equipment. Other types of objects 112 are possible. Further, the user 162 himself or herself may be considered as the object 112, the position of which shall be detected. As outlined above, the detector 110 comprises at least the optical sensors 118, 120. The optical sensors 118, 120 may be located inside the housing 158 of the detector 110. Further, the at least one transfer device 128 is comprised, such as one or more optical systems, preferably comprising one or more lenses. An opening 164 inside the housing 158, which, preferably, is located concentrically with regard to the optical axis 126 of the detector 110, preferably defines a direction of view 166 of the detector 110. A coordinate system 168 may be defined, in which a direction parallel or anti-parallel to the optical axis 126 may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis 126 may be defined as transversal directions. In the coordinate system 168, symbolically depicted in FIG. 5, a longitudinal direction is denoted by z, and transversal directions are denoted by x and y, respectively. Other types of coordinate systems 168 are feasible, such as non-Cartesian coordinate systems. The detector 110 may comprise the optical sensors 118, 120 as well as, optionally, further optical sensors. The optical sensors 118, 120 preferably are located in one and the same beam path, one behind the other, such that the first optical sensor 118 covers a portion of the second optical sensor 120. Alternatively, however, a branched beam path may be possible, with additional optical sensors in one or more additional beam paths, such as by branching off a beam path for at least one transversal detector or transversal sensor for determining transversal coordinates of the object 112 and/or of parts thereof. Alternatively, however, the optical sensors 118, 120 may be located at the same longitudinal coordinate. One or more light beams 116 are propagating from the object 112 and/or from one or more of the beacon devices 114, towards the detector 110. The detector 110 is configured for determining a position of the at least one object 112. For this purpose, as explained above in the context of FIGS. 1 to 4, the evaluation device 132 is configured to evaluate sensor signals provided by the optical sensors 118, 120. The detector 110 is adapted to determine a position of the object 112, and the optical sensors 118, 120 are adapted to detect the light beam 116 propagating from the object 112 towards the detector 110, specifically from one or more of the beacon devices 114.
In case no illumination source 136 is used, the beacon devices 114 and/or at least one of these beacon devices 114 may be or may comprise active beacon devices with an integrated illumination source such as a light-emitting diode. In case the illumination source 136 is used, the beacon devices 114 do not necessarily have to be active beacon devices. Contrarily, a reflective surface of the object 112 may be used, such as integrated reflective beacon devices 114 having at least one reflective surface such as a mirror, retro reflector, reflective film, or the like. The light beam 116, directly and/or after being modified by the transfer device 128, such as being focused by one or more lenses, illuminates the light-sensitive areas 122, 124 of the optical sensors 118, 120. For details of the evaluation, reference may be made to FIGS. 1 to 4 above. As outlined above, the determination of the position of the object 112 and/or a part thereof by using the detector 110 may be used for providing a human-machine interface 148, in order to provide at least one item of information to a machine 170. In the embodiments schematically depicted in FIG. 5, the machine 170 may be a computer and/or may comprise a computer. Other embodiments are feasible. The evaluation device 132 may even be fully or partially integrated into the machine 170, such as into the computer. As outlined above, FIG. 5 also depicts an example of a tracking system 152, configured for tracking the position of the at least one object 112 and/or of parts thereof. The tracking system 152 comprises the detector 110 and at least one track controller 172. The track controller 172 may be adapted to track a series of positions of the object 112 at specific points in time. The track controller 172 may be an independent device and/or may be fully or partially integrated into the machine 170, specifically the computer, as indicated in FIG. 5, and/or into the evaluation device 132. Similarly, as outlined above, the human-machine interface 148 may form part of an entertainment device 150. The machine 170, specifically the computer, may also form part of the entertainment device 150. Thus, by means of the user 162 functioning as the object 112 and/or by means of the user 162 handling a control device 160 functioning as the object 112, the user 162 may input at least one item of information, such as at least one control command, into the computer, thereby varying the entertainment functions, such as controlling the course of a computer game. In FIG. 6, a schematic view of a further embodiment of the detector 110 for determining a position of at least one object 112 is depicted. In this case, the object 112 comprises the at least one beacon device 114, from which the light beam 116 propagates towards at least one sensor element 115. The sensor element 115 comprises a matrix 117 of optical sensors 113, each optical sensor 113 having at least one light-sensitive area 121 facing the object 112. In this second preferred embodiment the optical sensors 113 may be arranged such that the light-sensitive areas of the optical sensors 113 differ in spatial offset and/or surface areas. The light beam 116, as an example, may propagate along the optical axis 126 of the detector 110. Other embodiments, however, are feasible. The optical detector 110 comprises the at least one transfer device 128, such as at least one lens and/or at least one lens system, specifically for beam shaping.
Consequently, the light beam 116 may be focused, such as in one or more focal points 130, and a beam width of the light beam 116 may depend on the longitudinal coordinate z of the object 112, such as on the distance between the detector 110 and the beacon device 114 and/or the object 112. The transfer device 128 constitutes the optical axis 129, wherein the transfer device 128 and the optical detector preferably may have a common optical axis. The optical sensors 118, 120 are positioned off focus. For details of this beam width dependency on the longitudinal coordinate, reference may be made to one or more of WO 2012/110924 A1 and/or WO 2014/097181 A1. As can be seen in FIG. 6, the light beam 116 generates a light spot 131 on the matrix 117. In FIG. 8, an exemplary view of the light spot 131 on the matrix 117 is shown. As can be seen, in this exemplary embodiment, the matrix 117 specifically may be a rectangular matrix, with rows numbered by "i", from 1 to n, and with columns, denoted by "j", from 1 to m, with n, m being integers. The center of the light spot 131, in this exemplary embodiment, is located in the sensor element denoted by i*, j*. The optical sensors 113 may provide sensor signals sij to an evaluation device 132 which, out of the sensor signals, determines at least one center signal, denoted symbolically by si*j*. As outlined in further detail above, for generating the center signal, the evaluation device 132 may comprise at least one center detector 133. As an example, the center detector 133 simply may determine the maximum sensor signal out of the plurality of sensor signals generated by the optical sensors 113. Alternative methods are feasible. Thus, as an example, instead of determining a single maximum optical sensor signal, a plurality of sensor signals may be used for generating the center signal. Thus, as an example, neighboring optical sensors which are adjacent to the optical sensor i*, j* may contribute to the center signal, such as optical sensors with the coordinates i*−1, . . . , i*+1 and j*−1, . . . , j*+1. These coordinates, in this simple exemplary embodiment, may form a square around the optical sensor i*, j*. Instead of a square having a side length of 3, as in this embodiment, other environments around the optical sensor having the highest sensor signal may be used, such as to optimize the signal-to-noise ratio of the detector signal and/or of the distance information. Further, additionally or alternatively, the center signal may be generated by adding up and/or averaging over sensor signals within a certain range from the maximum sensor signal, which may, for example, be beneficial to the measurement precision concerning noise such as pixel noise. Further, additionally or alternatively, for the determination of the center signal or sum signal, image processing techniques such as subpixel processing, interpolation, normalization or the like may be employed. Other alternatives are feasible. The evaluation device 132 may be adapted to determine the center signal by integrating the plurality of sensor signals, for example over the optical sensors around the optical sensor having the highest sensor signal.
For example, the beam profile may be a trapezoid beam profile and the evaluation device 132 may be adapted to determine an integral of the trapezoid, in particular of a plateau of the trapezoid. Further, when trapezoid beam profiles may be assumed, the evaluation device 132 may be adapted to determine the edge and center signals by equivalent evaluations making use of properties of the trapezoid beam profile, such as determination of the slope and position of the edges and of the height of the central plateau, and deriving edge and center signals by geometric considerations. Additionally or alternatively, the evaluation device 132 may be adapted to determine one or both of center information or edge information from at least one slice or cut of the light spot. This may be realized, for example, by replacing the area integrals in the quotient signal Q by line integrals along the slice or cut. For improved accuracy, several slices or cuts through the light spot may be used and averaged. In case of an elliptical spot profile, averaging over several slices or cuts may result in improved distance information. Further, the evaluation device 132 is configured for determining at least one sum signal out of the sensor signals of the matrix 117. For this purpose, the evaluation device 132 may comprise at least one summing device 135. The summing device 135 may be configured for adding up, integrating or averaging over the sensor signals of the entire matrix 117 or of a region of interest within the matrix 117, each option with or without the optical sensors from which the center signal is generated. Thus, in the exemplary embodiment shown in FIG. 8, the summing device 135 is simply configured for summing over the sensor signals sij of the entire matrix 117, except for the center optical detector with the coordinates i*, j*. Other options, however, are feasible. The evaluation device 132 may be adapted to determine the sum signal by integrating signals of the entire matrix 117 or of the region of interest within the matrix 117. For example, the beam profile may be a trapezoid beam profile and the evaluation device 132 may be adapted to determine an integral of the entire trapezoid. The evaluation device 132 may be adapted to determine at least one region of interest within the matrix, such as one or more pixels illuminated by the light beam, which are used for determination of the longitudinal coordinate of the object. For example, the evaluation device may be adapted to perform at least one filtering, for example at least one object recognition method. The region of interest may be determined manually by a user or may be determined automatically, such as by recognizing an object within an image generated by the optical sensors. The evaluation device 132 further is configured for forming at least one combined signal out of the center signal and the sum signal. For this purpose, the evaluation device 132, as an example, may comprise at least one combining device 137, such as at least one divider 142. As a very simple embodiment, a quotient Q may be formed, by dividing the center signal by the sum signal or vice versa. Other options are feasible and are given above. Finally, the evaluation device 132 is configured for determining at least one longitudinal coordinate z of the object by evaluating the combined signal. For this purpose, the evaluation device may comprise at least one further component, such as at least one evaluation component, for example a position evaluation device 144.
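Under the simplest choices named above (maximum pixel as center detector 133, whole matrix minus the center pixel as summing device 135, plain quotient as combining device 137), the evaluation chain can be sketched as follows. This is one assumption-laden reading, not the only evaluation the embodiments permit.

import numpy as np

def evaluate_matrix(sensor_signals):
    """Evaluate a matrix of sensor signals sij as in FIG. 8.

    Returns the center coordinates (i*, j*), the center signal, the sum
    signal over the rest of the matrix, and the combined signal Q. The
    returned coordinates also carry the transversal information
    discussed below."""
    s = np.asarray(sensor_signals, dtype=float)
    # Center detector 133: pixel with the maximum sensor signal.
    i_star, j_star = np.unravel_index(np.argmax(s), s.shape)
    center_signal = s[i_star, j_star]
    # Summing device 135: sum over the entire matrix except the center pixel.
    sum_signal = s.sum() - center_signal
    # Combining device 137 / divider 142: quotient of center and sum signals.
    q = center_signal / sum_signal
    return (i_star, j_star), center_signal, sum_signal, q

# A small synthetic light spot, off-centered as in FIG. 8:
spot = np.array([[0, 1, 2, 1],
                 [1, 3, 6, 3],
                 [1, 6, 9, 6],
                 [0, 3, 6, 2]])
print(evaluate_matrix(spot))  # center at (2, 2), Q = 9 / 41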
It shall be noted that the components of the evaluation device 132 shown in FIG. 8 may fully or partially be embodied in hardware and/or software. Further, the components may fully or partially be embodied as independent or separate components, and/or may fully or partially be embodied as components which are integrated into the sensor element 115. The embodiment of FIG. 8 further shows that, in addition to the longitudinal coordinate z, at least one item of information on a transversal coordinate of the object 112 and/or the beacon device 114 may be generated. Thus, the coordinates i* and j* provide additional items of information on a transversal position of the object 112 and/or the beacon device 114. In the setup of FIG. 6, the beacon device 114, for the sake of simplicity, is positioned in the center, i.e. on the optical axis 126, 129. In this case, the light spot 131 is likely to be centered in the middle of the matrix 117. In the embodiment shown in FIG. 8, however, as can easily be detected, the light spot 131 is off-centered. This off-centering is characterized by the coordinates i*, j*. By using known optical relationships between this off-centering and a transversal position of the object 112 and/or the beacon device 114, such as by using the lens equation, at least one transversal coordinate of the object 112 and/or the beacon device 114 may be generated. This option is also shown in the exemplary embodiment of FIG. 8. In FIG. 7, a modification of the embodiment of FIG. 6 is shown, which forms an alternative detector 110. The alternative embodiment of the detector 110 widely corresponds to the embodiment shown in FIG. 6. Instead of using an active beacon device 114 with light-emitting properties for generating the light beam 116, however, the detector 110 itself comprises at least one illumination source 136. The illumination source 136, as an example, may comprise at least one laser, whereas, in FIG. 6, as an example, the beacon device 114 may comprise a light-emitting diode (LED). Other embodiments, however, are feasible. The illumination source 136 may be configured for generating at least one illumination light beam 138 for fully or partially illuminating the object 112. The illumination light beam 138 is fully or partially reflected by the object 112 and travels back towards the detector 110, thereby forming the light beam 116. As shown in FIG. 7, as an example, the illumination light beam 138 may be parallel to the optical axis 126 of the detector 110. Other embodiments, i.e. off-axis illumination and/or illumination at an angle, are feasible, too. In order to provide an on-axis illumination, as shown in FIG. 7, as an example, one or more reflective elements 140 may be used, such as one or more prisms and/or mirrors, such as dichroic mirrors, such as movable mirrors or movable prisms. Apart from these modifications, the setup of the embodiment in FIG. 7 corresponds to the setup in FIG. 6. Thus, again, an evaluation device 132 may be used, having e.g. at least one divider 142 for forming the quotient signal Q, and, as an example, at least one position evaluation device 144, for deriving the at least one longitudinal coordinate z from the quotient signal Q and/or another type of combined signal. It shall be noted that the evaluation device 132, again, may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 133, 135, 137, 142, 144 may fully or partially be embodied by appropriate software components and/or may fully or partially be embodied by hardware components.
The optical sensors 113 of the matrix 117, as an example, may be pixels of a pixelated optical sensor, such as a CCD and/or a CMOS sensor chip. Thus, as an example, the optical sensors 113 may have a side length and/or an equivalent diameter in the range of a few micrometers to several hundred micrometers. It shall be noted, however, that larger pixels or optical sensors 113 may be used. Further, instead of using an integrated sensor element 115 such as a CCD and/or CMOS sensor chip, non-integrated matrices may be used. FIG. 9 shows, in a highly schematic illustration, an exemplary embodiment of a detector 110, e.g. according to the embodiments in FIG. 6 or 7. The detector 110, specifically, may be embodied as the camera 146 and/or may be part of a camera 146. The camera 146 may be made for imaging, specifically for 3D imaging, and may be made for acquiring standstill images and/or image sequences such as digital video clips. Other embodiments are feasible. FIG. 9 further shows an embodiment of a detector system 134, which, besides the at least one detector 110, comprises one or more beacon devices 114, which, in this example, may be attached to and/or integrated into an object 112, the position of which shall be detected by using the detector 110. FIG. 9 further shows an exemplary embodiment of a human-machine interface 148, which comprises the at least one detector system 134 and, further, an entertainment device 150, which comprises the human-machine interface 148. The figure further shows an embodiment of a tracking system 152 for tracking a position of the object 112, which comprises the detector system 134. The components of the devices and systems shall be explained in further detail below. FIG. 9 further shows an exemplary embodiment of a scanning system 154 for scanning a scenery comprising the at least one object 112, such as for scanning the object 112 and/or for determining at least one position of the at least one object 112. The scanning system 154 comprises the at least one detector 110 and, further, optionally, the at least one illumination source 136 as well as, optionally, at least one further illumination source 136, which is not shown. The illumination source 136, generally, may be configured to emit the at least one illumination light beam 138, such as for illumination of at least one dot, e.g. a dot located on one or more of the positions of the beacon devices 114 and/or on a surface of the object 112. It shall be noted, however, that an active beacon device, as e.g. shown in the setup of FIG. 6, may also be used, and, thus, that setups with no integrated illumination source 136 are also feasible. The scanning system 154 may be designed to generate a profile of the scenery including the object 112 and/or a profile of the object 112, and/or may be designed to generate at least one item of information about the distance between the at least one dot and the scanning system 154, specifically the detector 110, by using the at least one detector 110. As outlined above, an exemplary embodiment of the detector 110 which may be used in the setup of FIG. 9 is shown in FIGS. 6 and 7. Thus, the detector 110, besides the sensor element 115, comprises the at least one evaluation device 132, having, e.g., the at least one center detector 133, the at least one summing device 135, the at least one combining device 137, the at least one divider 142, the at least one position evaluation device 144 and/or combinations thereof. These components, which may optionally be present, are symbolically depicted in FIG. 9.
The components of the evaluation device 132 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector 110. Besides the possibility of fully or partially combining two or more components, one or more of the components of the evaluation device 132 and one or more of the components of the sensor element 115 may be interconnected by one or more connectors 156 and/or by one or more interfaces, as symbolically depicted in FIG. 9. Further, the one or more connectors 156 may comprise one or more drivers and/or one or more devices for modifying or preprocessing sensor signals. Further, instead of using the at least one optional connector 156, the evaluation device 132 may fully or partially be integrated into the sensor element 115 and/or into a housing 158 of the detector 110. Additionally or alternatively, the evaluation device 132 may fully or partially be designed as a separate device. In this exemplary embodiment, the object 112, the position of which may be detected, may be designed as an article of sports equipment and/or may form a control element or a control device 160, the position of which may be manipulated by a user 162. As an example, the object 112 may be or may comprise a bat, a racket, a club or any other article of sports equipment and/or fake sports equipment. Other types of objects 112 are possible. Further, the user 162 himself or herself may be considered as the object 112, the position of which shall be detected. As outlined above, the detector 110 comprises at least the sensor element 115. The sensor element 115, wherein one or more sensor elements 115 may be provided, may be located inside the housing 158 of the detector 110. Further, the at least one transfer device 128 is comprised, such as one or more optical systems, preferably comprising one or more lenses. An opening 164 inside the housing 158, which, preferably, is located concentrically with regard to the optical axis 126 of the detector 110, preferably defines a direction of view 166 of the detector 110. In the coordinate system 168, symbolically depicted in FIG. 9, a longitudinal direction is denoted by z, and transversal directions are denoted by x and y, respectively. Other types of coordinate systems 168 are feasible, such as non-Cartesian coordinate systems. The detector 110 may comprise the sensor element 115 as well as, optionally, one or more further optical sensors. A non-branched beam path may be used or, alternatively, a branched beam path may be possible, with, e.g., additional optical sensors in one or more additional beam paths, such as by branching off a beam path for at least one transversal detector or transversal sensor for determining the transversal coordinates of the object 112 and/or parts thereof. As outlined above, however, in the context of FIG. 8, the at least one transversal coordinate may also be determined by the sensor element 115 itself, such as by determining the transversal coordinates of the center of the light spot 131. One or more light beams 116 are propagating from the object 112 and/or from one or more of the beacon devices 114, towards the detector 110. The detector 110 is configured for determining a position of the at least one object 112. For this purpose, as outlined above in the context of FIGS. 6 to 8, the evaluation device 132 may be configured to evaluate the sensor signals provided by the optical sensors 113 of the matrix 117 of the sensor element 115.
The detector 110 is adapted to determine a position of the object 112, and the optical sensors 113 are adapted to detect the light beam 116 propagating from the object 112 towards the detector 110, specifically from one or more of the beacon devices 114. In case no illumination source 136 is used, the beacon devices 114 and/or at least one of these beacon devices 114 may comprise active beacon devices with an integrated illumination source such as light-emitting diodes. In case the illumination source 136 is used, the beacon devices do not necessarily have to be active beacon devices. Contrarily, a reflective surface of the object 112 may be used, such as integrated reflective beacon devices 114 having at least one reflective surface. The light beam 116, directly and/or after being modified by the transfer device 128, such as being focused by one or more lenses, illuminates the sensor element 115. For details of the evaluation, reference may be made to FIGS. 6 to 8 above. As outlined above, the determination of the position of the object 112 and/or a part thereof by using the detector 110 may be used for providing a human-machine interface 148, in order to provide at least one item of information to a machine 170. In the embodiments schematically depicted in FIG. 9, the machine 170 may be a computer and/or may comprise a computer. Other embodiments are feasible. The evaluation device 132 may even be fully or partially integrated into the machine 170, such as into the computer. As outlined above, FIG. 9 also depicts an example of a tracking system 152, configured for tracking the position of the at least one object 112 and/or of parts thereof. The tracking system 152 comprises the detector 110 and at least one track controller 172. The track controller 172 may be adapted to track a series of positions of the object 112 at specific points in time. The track controller 172 may be an independent device and/or may fully or partially be integrated into the machine 170, specifically the computer, as indicated in FIG. 9, and/or into the evaluation device 132. Similarly, as outlined above, the human-machine interface 148 may form part of an entertainment device 150. The machine 170, specifically the computer, may also form part of the entertainment device 150. Thus, by means of the user 162 functioning as the object 112 and/or by means of the user 162 handling a control device 160 functioning as the object 112, the user 162 may input at least one item of information, such as at least one control command, into the computer, thereby varying the entertainment functions, such as controlling the course of a computer game. In FIG. 10, experimental data are shown which demonstrate an exemplary embodiment of the present invention in terms of measurement data. The figure shows a quotient signal Q as a function of a longitudinal coordinate z, given in millimeters, for various illumination intensities. In order to gain the experimental data shown in FIG. 10, an experimental setup was used with a sensor element 115 formed by a Basler AC 1920-40GC camera, with a transfer device 128 formed by a Nikkor 50 mm lens. As a beacon device 114, a light-emitting diode (LED) was used, having a central nominal wavelength of 532 nm. A diffusor made from Teflon film was used in front of the LED and a diaphragm, in order to provide a well-defined light-emitting area having a diameter of 5 mm. The intensity of the LED was varied, by varying a drive current of the LED between 5 mA and 150 mA. In the experiments, the distance z between the LED and the lens was varied from 300 mm to 1700 mm.
The signal of the Basler AC 1920-40GC camera was evaluated by the following procedure. As a center signal, an accumulated signal of an inner circle having a radius of 15 pixels around the optical axis was determined, with the light spot centered at the optical axis. As a sum signal, the sum of the signals of all pixels of the camera within the light spot was generated. A quotient signal was formed, by dividing the sum signal by the center signal. In FIG. 10, the solid curves, overlapping, show the quotient signal curves for LED currents of 150 mA, 125 mA, 100 mA and 50 mA. As can be seen, there are basically no differences between these curves, within the whole measurement range. This experiment clearly shows that the quotient signal is widely independent of the total power of the light beam. Only at lower intensities can the curves be distinguished. Thus, the dotted curve shows measurements at an LED current of 25 mA, the dashed-dotted line at an LED current of 10 mA, and the dashed line at an LED current of 5 mA. Still, even at these low intensities, the measurement curves are very close to the solid curves, which shows the high tolerance of the measurement.
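The evaluation procedure just described translates almost line by line into code. In the sketch below, the spot-membership threshold and the synthetic Gaussian spot are illustrative assumptions; the inner-circle radius of 15 pixels and the quotient direction (sum divided by center) follow the text.

import numpy as np

def quotient_from_image(image, center, inner_radius=15, spot_threshold=0.0):
    """Replicate the FIG. 10 evaluation: center signal = accumulated signal
    within an inner circle of radius 15 pixels around the spot center;
    sum signal = sum over all pixels belonging to the light spot;
    quotient = sum signal / center signal."""
    img = np.asarray(image, dtype=float)
    ii, jj = np.indices(img.shape)
    r2 = (ii - center[0]) ** 2 + (jj - center[1]) ** 2
    center_signal = img[r2 <= inner_radius ** 2].sum()
    sum_signal = img[img > spot_threshold].sum()  # pixels within the light spot
    return sum_signal / center_signal

# Synthetic Gaussian spot centered on the optical axis of a 200 x 200 frame:
y, x = np.indices((200, 200))
spot = np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / (2 * 30.0 ** 2))
print(quotient_from_image(spot, center=(100, 100)))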
In FIG. 11, a schematic view of a further embodiment of a detector 110 for determining a position of at least one object 112 is depicted. In this case, the object 112 comprises the beacon device 114, from which the light beam 116 propagates towards the first optical sensor 118 and the second optical sensor 120. The first optical sensor 118 may comprise a first light-sensitive area 122, and the second optical sensor 120 comprises the second light-sensitive area 124. The optical sensors 118, 120, as shown e.g. in FIG. 14, may be part of an array 174 of optical sensors 176, such as the first optical sensor 118 being the optical sensor 176 in the upper left corner of the array 174 and the second optical sensor 120 being the optical sensor 176 in the lower right corner of the array 174. Other choices are feasible. The array 174, as an example, may be a quadrant photodiode 178, and the optical sensors 176 may be partial diodes of the quadrant photodiode 178. The light beam 116, as an example, may propagate along an optical axis 126 of the detector 110. Other embodiments, however, are feasible. The optical detector 110 comprises the at least one transfer device 128, such as at least one lens and/or at least one lens system, specifically for beam shaping. Consequently, the light beam 116 may be focused, such as in one or more focal points 130, and a beam width of the light beam 116 may depend on the longitudinal coordinate z of the object 112, such as on the distance between the detector 110 and the beacon device 114 and/or the object 112. The optical sensors 118, 120 are positioned off focus. In this third preferred embodiment the optical sensors 118, 120 may be arranged such that the light-sensitive areas of the optical sensors differ in their spatial offset and/or their surface areas. For details of this beam width dependency on the longitudinal coordinate, reference may be made to one or more of WO 2012/110924 A1 and/or WO 2014/097181 A1. As can be seen in FIG. 14, the setup of the detector 110 is off-centered in various ways. Thus, a geometrical center 180 of the array 174 may be off-centered from the optical axis 126, 129 by offset d0. Further, a geometrical center 182 of the first optical sensor 118 is off-centered from the optical axis 126 by offset d1, and a geometrical center 184 of the second optical sensor 120 is off-centered from the optical axis 126 by offset d2, wherein d1 ≠ d2. In other words, a light spot 186 is formed, which is unequally distributed over the light-sensitive areas 122, 124. As will be shown in further detail below, the detector 110 may be configured for automatically establishing the off-centered position of the light spot 186 on the array 174 (a minimal sketch of this check is given after the description of FIG. 12 below). For this purpose, firstly, the detector 110 may be configured for determining whether the sensor signals generated by the optical sensors 176 of the array 174 are equal. If this should be the case, the detector 110 may be configured to determine that the light spot 186 is centered in the array 174 and, consequently, may shift the light spot 186 out of the geometrical center 180 of the array 174, such as by shifting the whole array 174 in a plane perpendicular to the optical axis 126, 129. For this purpose, as will be shown in further detail below with respect to FIG. 13, one or more actuators may be provided in the detector 110. Turning back to the setup of FIG. 11, the first optical sensor 118, in response to the illumination by the light beam 116, generates a first sensor signal s1, whereas the second optical sensor 120 generates a second sensor signal s2. Preferably, the optical sensors 118, 120 are linear optical sensors, i.e. the sensor signals s1 and s2 each are solely dependent on the total power of the light beam 116 or of the portion of the light beam 116 illuminating their respective light-sensitive areas 122, 124, whereas these sensor signals s1 and s2 are independent from the actual size of the light spot of illumination. In other words, preferably, the optical sensors 118, 120 do not exhibit the above-described FiP effect. The sensor signals s1 and s2 are provided to an evaluation device 132 of the detector 110. The evaluation device 132, as symbolically shown in FIG. 11, may specifically be embodied to derive a quotient signal Q, as explained above. The quotient signal Q, derived by dividing the sensor signals s1 and s2 or multiples or linear combinations thereof, may be used for deriving at least one item of information on a longitudinal coordinate z of the object 112 and/or the beacon device 114, from which the light beam 116 propagates towards the detector 110, as will be explained in further detail with reference to the quotient signals shown in FIGS. 16 to 18 below. The detector 110, in combination with the at least one beacon device 114, may be referred to as the detector system 134, as will be explained in further detail below with reference to FIG. 13. In FIG. 12, a modification of the embodiment of FIG. 11 is shown, which forms an alternative detector 110. The alternative embodiment of the detector 110 widely corresponds to the embodiment shown in FIG. 11. Instead of using an active light source, i.e. a beacon device 114 with light-emitting properties for generating the light beam 116, however, the detector 110 comprises the at least one illumination source 136. The illumination source 136, as an example, may comprise a laser, whereas, in FIG. 11, as an example, the beacon device 114 may comprise a light-emitting diode (LED). The illumination source 136 may be configured for generating at least one illumination light beam 138 for illuminating the object 112. The illumination light beam 138 is fully or partially reflected by the object 112 and travels back towards the detector 110, thereby forming the light beam 116. The illumination source 136, as an example, may comprise one or more diaphragms 190, such as an adjustable diaphragm 190, e.g. an adjustable iris diaphragm and/or a pin hole.
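Referring back to the off-centering check announced in the description of FIG. 14 above, a minimal reading is sketched below. The equality tolerance, the shift step and the pairing of partial diodes into s1 and s2 are illustrative assumptions rather than claimed values.

def off_center_and_quotient(quadrant_signals, shift_array, tolerance=0.02):
    """quadrant_signals -- signals (sA, sB, sC, sD) of the four partial
    diodes of quadrant photodiode 178.
    shift_array       -- callback commanding the actuator(s) to shift the
                         array 174 in the plane perpendicular to the axis.

    If all four signals are (nearly) equal, the light spot 186 is centered
    and the array is shifted out of center first; otherwise the quotient
    Q = s1 / s2 of the two designated partial diodes is returned."""
    sa, sb, sc, sd = quadrant_signals
    mean = (sa + sb + sc + sd) / 4.0
    if all(abs(s - mean) <= tolerance * mean for s in (sa, sb, sc, sd)):
        shift_array(dx=0.1, dy=0.1)  # establish the off-centered situation
        return None                  # re-measure after the shift
    s1, s2 = sa, sd  # e.g. upper-left and lower-right partial diodes
    return s1 / s2

q = off_center_and_quotient((0.9, 1.3, 1.1, 1.7), lambda dx, dy: None)
print(q)  # 0.9 / 1.7 for this unequal illumination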
The setup shown in FIG. 12, as an example, may also be used in or as a readout device 192 for optical storage media. Thus, as an example, the object 112 may be an optical storage medium such as an optical storage disc, e.g. a CD, DVD or Blu-ray disc. By measuring the presence or non-presence of data storage modules and the depth of the same within the object 112, by using the above-mentioned measurement principle, a data readout may take place. The light beam 116, specifically, may travel along the optical axis 126 of the detector 110. As shown in FIG. 12, as an example, the illumination light beam 138 may be parallel to the optical axis 126 of the detector 110. Other embodiments, i.e. off-axis illumination and/or illumination at an angle, are feasible, too, as will be shown in the context of FIGS. 19A and 19B below. In order to provide an on-axis illumination, as shown in FIG. 12, as an example, one or more reflective elements 140 may be used, such as one or more prisms and/or mirrors, such as dichroic mirrors, such as movable mirrors or movable prisms. Apart from these modifications, the setup of the embodiment in FIG. 12 corresponds to the setup in FIG. 11. Thus, again, an evaluation device 132 may be used, having, e.g., at least one divider 142 for forming the quotient signal Q, and, as an example, at least one position evaluation device 144, for deriving the at least one longitudinal coordinate z from the at least one quotient signal Q. It shall be noted that the evaluation device 132 may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 142, 144 may be embodied by appropriate software components. It shall further be noted that the embodiments shown in FIGS. 11 and 12 simply provide embodiments for determining the longitudinal coordinate z of the object 112. It is also feasible, however, to modify the setups of FIGS. 11 and 12 to provide additional information on a transversal coordinate of the object 112 and/or of parts thereof. As an example, e.g. in between the transfer device 128 and the optical sensors 118, 120, one or more parts of the light beam 116 may be branched off, and may be guided to a position-sensitive device such as one or more CCD and/or CMOS pixelated sensors and/or additional quadrant detectors and/or other position-sensitive devices, which, from a transversal position of a light spot generated thereon, may derive a transversal coordinate of the object 112 and/or of parts thereof. For further details, as an example, reference may be made to one or more of the above-mentioned prior art documents which provide for potential solutions of transversal sensors. FIG. 13 shows, in a highly schematic illustration, an exemplary embodiment of a detector 110, e.g. according to the embodiments shown in FIG. 11 or 12. The detector 110 specifically may be embodied as a camera 146 and/or may be part of a camera 146. The camera 146 may be made for imaging, specifically for 3D imaging, and may be made for acquiring standstill images and/or image sequences such as digital video clips. Other embodiments are feasible.
FIG. 13 further shows an embodiment of a detector system 134, which, besides the at least one detector 110, comprises one or more beacon devices 114, which, in this example, may be attached to and/or integrated into an object 112, the position of which shall be detected by using the detector 110. FIG. 13 further shows an exemplary embodiment of a human-machine interface 148, which comprises the at least one detector system 134 and, further, an entertainment device 150, which comprises the human-machine interface 148. The figure further shows an embodiment of a tracking system 152 for tracking a position of the object 112, which comprises the detector system 134. The components of the devices and systems shall be explained in further detail below. FIG. 13 further shows an exemplary embodiment of a scanning system 154 for scanning a scenery comprising the object 112, such as for scanning the object 112 and/or for determining at least one position of the at least one object 112. The scanning system 154 comprises the at least one detector 110, and, further, optionally, the at least one illumination source 136 as well as, optionally, at least one further illumination source 136. The illumination source 136, generally, is configured to emit at least one illumination light beam 138, such as for illumination of at least one dot, e.g. a dot located on one or more of the positions of the beacon devices 114 and/or on a surface of the object 112. The scanning system 154 may be designed to generate a profile of the scenery including the object 112 and/or a profile of the object 112, and/or may be designed to generate at least one item of information about the distance between the at least one dot and the scanning system 154, specifically the detector 110, by using the at least one detector 110. As outlined above, an exemplary embodiment of the detector 110 which may be used in the setup of FIG. 13 is shown in FIGS. 11 and 12, or will be shown, as an alternative embodiment, in FIG. 15 below. Thus, the detector 110, besides the optical sensors 118, 120, comprises at least one evaluation device 132, having e.g. the at least one divider 142 and/or the at least one position evaluation device 144, as symbolically depicted in FIG. 13. The components of the evaluation device 132 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector 110. Besides the possibility of fully or partially combining two or more components, one or more of the optical sensors 118, 120 and one or more of the components of the evaluation device 132 may be interconnected by one or more connectors 156 and/or by one or more interfaces, as symbolically depicted in FIG. 13. Further, the one or more connectors 156 may comprise one or more drivers and/or one or more devices for modifying or preprocessing sensor signals. Further, instead of using the at least one optional connector 156, the evaluation device 132 may fully or partially be integrated into one or both of the optical sensors 118, 120 and/or into a housing 158 of the detector 110. Additionally or alternatively, the evaluation device 132 may fully or partially be designed as a separate device. In FIG. 13, as an example, one or more reflective elements 140 may be used, which may, for example, be partially transparent, such as one or more prisms and/or mirrors, such as dichroic mirrors, such as movable mirrors or movable prisms.
The detector110as symbolically shown in the exemplary embodiment ofFIG.13may also comprise at least one actuator188for moving the array174of the optical sensors176relative to the optical axis126. As outlined above, for providing this movement, the optical axis126may be moved in relation to the array174by moving the optical axis126, by moving the array174or both. Thus, as an example, the optical axis may be moved by using one or more deflecting elements and/or by using the transfer device128. As a simple example, a lens of the transfer device128may be tilted, such as by using one or more actuators188(not depicted). Additionally or alternatively, the array174may be shifted by the one or more actuators188, preferably in a plane perpendicular to the optical axis126. As an example, one or more electromechanical actuators may be used, such as one electromechanical actuator for an x-direction and another electromechanical actuator for a y-direction. Other embodiments are feasible. Thereby, the above-mentioned off-centering procedure may be implemented for establishing an off-centered situation as shown e.g. inFIG.14. In the exemplary embodiment shown inFIG.13, further, the object112, the position of which may be detected, may be designed as an article of sports equipment and/or may form a control element or a control device160, the position of which may be manipulated by a user162. As an example, the object112may be or may comprise a bat, a racket, a club or any other article of sports equipment and/or fake sports equipment. Other types of objects112are possible. Further, the user162himself or herself may be considered as the object112, the position of which shall be detected. As outlined above, the detector110comprises at least the optical sensors176, including at least the first optical sensor118and the second optical sensor120. The optical sensors176may be located inside the housing158of the detector110. Further, the at least one transfer device128is comprised, such as one or more optical systems, preferably comprising one or more lenses. An opening164inside the housing158, which, preferably, is located concentrically with regard to the optical axis126of the detector110, preferably defines a direction of view166of the detector110. A coordinate system168may be defined, in which a direction parallel or anti-parallel to the optical axis126may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis126may be defined as transversal directions. In the coordinate system168, symbolically depicted inFIG.13, a longitudinal direction is denoted by z, and transversal directions are denoted by x and y, respectively. Other types of coordinate systems168are feasible, such as non-Cartesian coordinate systems. The detector110may comprise the optical sensors118,120as well as, optionally, further optical sensors. The optical sensors118,120preferably are located in one and the same beam path, one behind the other, such that the first optical sensor118covers a portion of the second optical sensor120. Alternatively, however, a branched beam path may be possible, with additional optical sensors in one or more additional beam paths, such as by branching off a beam path for at least one transversal detector or transversal sensor for determining transversal coordinates of the object112and/or of parts thereof. One or more light beams116are propagating from the object112and/or from one or more of the beacon devices114, towards the detector110. The detector110is configured for determining a position of the at least one object112. For this purpose, as explained above in the context ofFIGS.11,12and14, the evaluation device132is configured to evaluate sensor signals provided by the optical sensors118,120. 
The detector110is adapted to determine a position of the object112, and the optical sensors118,120are adapted to detect the light beam116propagating from the object112towards the detector110, specifically from one or more of the beacon devices114. In case no illumination source136is used, the beacon devices114and/or at least one of these beacon devices114may be or may comprise active beacon devices with an integrated illumination source such as a light-emitting diode. In case the illumination source136is used, the beacon devices114do not necessarily have to be active beacon devices. Contrarily, a reflective surface of the object112may be used, such as integrated reflective beacon devices114having at least one reflective surface. The light beam116, directly and/or after being modified by the transfer device128, such as being focused by one or more lenses, illuminates the light-sensitive areas122,124of the optical sensors118,120. For details of the evaluation, reference may be made toFIGS.11,12and14above. As outlined above, the determination of the position of the object112and/or a part thereof by using the detector110may be used for providing a human-machine interface148, in order to provide at least one item of information to a machine170. In the embodiments schematically depicted inFIG.13, the machine170may be a computer and/or may comprise a computer. Other embodiments are feasible. The evaluation device132may even be fully or partially integrated into the machine170, such as into the computer. As outlined above,FIG.13also depicts an example of a tracking system152, configured for tracking the position of the at least one object112and/or of parts thereof. The tracking system152comprises the detector110and at least one track controller172. The track controller172may be adapted to track a series of positions of the object112at specific points in time. The track controller172may be an independent device and/or may be fully or partially integrated into the machine170, specifically the computer, as indicated inFIG.13and/or into the evaluation device132. Similarly, as outlined above, the human-machine interface148may form part of an entertainment device150. The machine170, specifically the computer, may also form part of the entertainment device150. Thus, by means of the user162functioning as the object112and/or by means of the user162handling a control device160functioning as the object112, the user162may input at least one item of information, such as at least one control command, into the computer, thereby varying the entertainment functions, such as controlling the course of a computer game. In the setup of the detectors110as shown inFIGS.11,12and13, the optical sensors176are part of an array174, and all optical sensors176may be located in one and the same plane oriented essentially perpendicular to the optical axis126. As noted in this context, when mentioning "perpendicular" or "essentially perpendicular", preferably, a 90° orientation is given. However, tolerances may be present, such as angular tolerances of no more than 20°, preferably of no more than 10° or more preferably of no more than 5°. The optical sensors176, however, do not necessarily have to be located in an array174and do not necessarily have to be located in one and the same plane, as is shown in an alternative setup of the detector110shown inFIG.15. In this figure, only the optical components are shown. For other components, reference may be made toFIGS.11,12and13above. 
As can be seen, in this alternative setup, two or more optical sensors176are present, comprising at least one first optical sensor118and at least one second optical sensor120located in different planes which are offset in a direction of the optical axis126, also referred to as the z-direction. Thus, further, as can also be seen, the optical sensors118,120may overlap, whereas in the previous embodiments, preferably, no overlap between the optical sensors176is given. Apart from these modifications, the functionality and the evaluation of the sensor signals generally corresponds to the embodiment ofFIGS.11,12and13above. As discussed above, for evaluating the at least two sensor signals of the at least two optical sensors176and for deriving therefrom information on the longitudinal position of the object112, such as a distance between the detector110and the object112and/or a z-coordinate of the object112, preferably, at least one combined sensor signal is generated by the evaluation device132. The combined sensor signal, as long as this combined sensor signal provides, at least over a measurement range, a unique function of the distance, may be used for deriving the longitudinal coordinate. As an example, the combined sensor signal may be or may comprise at least one quotient signal Q. InFIGS.16to18, quotient signals Q of two sensor signals of two optical sensors176are shown under various measurement conditions. In each case, the quotient signal Q is denoted on the vertical axis, as a function of the longitudinal coordinate z of the object112on the horizontal axis, the latter given in centimeters. In all experiments, a setup as shown inFIG.12was used. As an illumination source136, in the experiments ofFIGS.16and17, a 980 nm Picotronic laser source was used, in conjunction with a lens having a focal length of 100 mm. In the experiment ofFIG.18, a Laser Components laser light source having a wavelength of 850 nm was used, in conjunction with a lens having a focal length of 79 mm. In all experiments, the laser beam was aligned on the optical axis126via a small prism in front of the lens128, forming a reflective element140. A diaphragm190in front of the laser source was used to vary the spot size. The quadrant diode178was used to measure the reflection of the laser source on different materials. In all experiments, the distance dependency is given by the quotient Q of two adjacent quadrant currents. InFIG.16, the laser power was varied during the experiment, from 8 nA laser current, denoted by the dotted line, to 106 nA, denoted by the solid line. Therein, since the laser drive current typically does not provide a measure for the laser intensity, the laser current indicated therein is a current of a silicon photodetector in a measurement setup in which the laser illuminates a white sheet of paper at a distance of 330 mm from the lens. As is clearly visible, the curves are nearly identical and, at least within this range of variation of the laser power, do not significantly depend on the laser power. This experiment shows that the quotient signal provides a reliable and monotonic function of the longitudinal coordinate, independent from the influence of the brightness of the illumination source. InFIG.17, a spot size of the illumination source136was varied, by varying the open diameter of the diaphragm190in front of the laser. The spot size was varied from 1.5 mm, denoted by the dotted line, to 3.5 mm, denoted by the solid line, in steps of 0.5 mm. 
As can be seen, up to a distance of approximately 200 cm, the quotient signal Q does not depend on the spot size and, thus, again, is not negatively affected by this variation. InFIG.18, a material of the object112illuminated by the laser beam was varied. Therein, the dotted line denotes white paper, the dashed line with the smallest dashes denotes black paper, the dashed line with the medium dashes denotes wood, and the dashed line with the largest dashes denotes an aluminum plate. As can be seen, at least up to a measurement range of approximately 250 cm, the measurement does not strongly depend on the type of material used for the object112. The experiments shown inFIGS.16to18, thus, clearly demonstrate that the quotient signal Q provides a reliable function of the distance. At least within a range of measurement, the function monotonically rises with the distance. The function is not strongly influenced by the most significant variations which may occur in real life measurements, such as the brightness of the illumination source, the spot size of the illumination source or the material of the object112. Thus, by evaluating the quotient signal Q of two or more optical sensors176, reliable distance information may be generated. Thus, as an example, the curves shown inFIGS.16to18directly may be used as calibration curves for the purpose of the evaluation device132. Other evaluation methods, however, are feasible. InFIGS.19A and19B, an alternative embodiment of the detector110is shown which is a modification of the setup shown inFIG.12. Thus, for most elements and optional details as well as further elements not shown in the schematicFIGS.19A and19B, reference may be made to the description ofFIG.12above. InFIG.12, the illumination light beam138, as discussed above, preferably travels along the optical axis126, i.e. parallel to the optical axis126or even on the optical axis126. In this setup, the position of the center of the light spot186typically does not depend on the z-coordinate of the object112, such as on a distance between the object112and the detector110. In other words, the diameter or equivalent diameter of the light spot186changes with the distance between the object112and the detector110whereas, typically, the position of the light spot186on the array174does not. Contrarily, inFIGS.19A and19B, a setup of the detector110is shown in which an illumination light beam138travels off-axis, i.e. one or both of at an angle other than 0° with the optical axis126or parallel to the optical axis126but shifted from the optical axis126. This embodiment, as will be discussed in further detail below, demonstrates that the method according to the present invention can be further enhanced by increasing the z-dependency of a combined sensor signal. Thus, inFIG.19A, a side view is shown with two different positions of the object112, i.e. a first position at z1, drawn in solid lines, and a second position at z2, drawn in dashed lines. As can be seen, the illumination light beam138which, as an example, propagates at an angle of 5° to 30°, e.g. 10° to 20°, with the optical axis126, hits the object112in both cases at different positions. From these points of the object112illuminated by the illumination light beam138, light beams116propagate towards the detector110, wherein, again, the light beam116for the object112being located at position z1is drawn in solid lines, wherein the light beam116for the object112being located at position z2is drawn in dashed lines. InFIG.19B, the array174, e.g. 
a quadrant photodiode, is shown in an enlarged fashion. As can be seen in this setup, the position of the light spot186moves with the longitudinal position z of the object112. Thus, not only the size of the light spot186but also its position on the array174is changed by the longitudinal position z. InFIG.19B, this movement of the light spot186is denoted by arrow z. Consequently, by this movement of the light spot186, the z-dependency of a combined sensor signal taking into account at least two sensor signals of the optical sensors176may be increased. As an example, the four diodes of the array174, inFIG.19B, are denoted by D1-D4. The quotient signal Q, as an example, may be formed as Q=i(D1)/i(D4), with i(D1) being the sensor signal of photodiode D1, and i(D4) being the sensor signal of photodiode D4. As shown inFIG.19B, the quadrant diode may comprise two dividing lines. The dividing lines may be arranged orthogonal to each other. The orthogonal arrangement of the dividing lines allows adjusting the quotient signal for near field and far field applications independently from each other. In addition to determining the quotient signal of sensor signals of two optical sensors of the quadrant diode, the evaluation device132may be adapted to determine a second quotient using at least three or all four sensor signals of the quadrant diode. The two quotients can be formed such that two distinct distance ranges are covered. The two quotient signals for the near field and the far field may have an overlap region in which both quotients allow a reasonable determination of the longitudinal distance z. For example, the quotient may be determined by Q=i(D1+D2)/i(D3+D4), wherein the sensor signals of the two top quadrants, also called the top segment, are divided by the sensor signals of the two bottom quadrants, also called the bottom segment. Using the quotient of sensor signals determined by two sensor areas which have a dividing line parallel to the baseline of the detector may allow determining the quotient without any distance dependent movement of the light spot. In particular, as an example, if the dividing line between the top and bottom segments is parallel to the baseline, the quotient signal determined from the top segment divided by the bottom segment may be used in the near field, wherein the light spot may illuminate only one of a left or right segment of the quadrant diode. In this case, determining the quotient signal by dividing sensor signals of the left and right segments may not be possible. However, determining the quotient by dividing the sensor signals of the top and bottom segments may provide a reasonable distance measurement. The quotient signal determined by dividing sensor signals of the left and right segments, i.e. Q=i(D1+D3)/i(D2+D4), may be used for far field measurement, wherein the light spot illuminates both the left and right segments. Furthermore, the evaluation device may be adapted to determine the quotient by dividing sensor signals of opposing segments or neighboring segments. The evaluation device may be adapted to combine the acquired sensor signals i(D1), i(D2), i(D3) and i(D4) of the quadrants such that distance measurement is possible over a wide range with a large resolution. In the situation shown inFIG.12, the position of the light spot186does not depend on z. With a change in z, depending on the optical situation, the spot will become larger or smaller, such as by becoming more diffuse or more focused. 
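Purely as a non-limiting illustration of the quadrant-diode combinations discussed above, the following Python sketch forms the neighbor quotient as well as the near field and far field quotients from four measured quadrant currents. The labelling, with D1 and D2 forming the top segment and D1 and D3 forming the left segment, follows from the formulas above; the function names and all numeric values are hypothetical.

# Minimal sketch, not part of the embodiments: quotient combinations of a
# quadrant diode. Labelling inferred from the formulas above: D1 top-left,
# D2 top-right, D3 bottom-left, D4 bottom-right.

def neighbor_quotient(i_d1: float, i_d4: float) -> float:
    """Q = i(D1) / i(D4), the quotient of two adjacent quadrant currents."""
    return i_d1 / i_d4

def near_field_quotient(i_d1, i_d2, i_d3, i_d4):
    """Top segment over bottom segment: Q = i(D1+D2) / i(D3+D4).
    Usable when the spot illuminates only a left or a right segment."""
    return (i_d1 + i_d2) / (i_d3 + i_d4)

def far_field_quotient(i_d1, i_d2, i_d3, i_d4):
    """Left segment over right segment: Q = i(D1+D3) / i(D2+D4).
    Usable when the spot illuminates both left and right segments."""
    return (i_d1 + i_d3) / (i_d2 + i_d4)

# Example usage with hypothetical quadrant currents (arbitrary units); in an
# overlap region both quotients permit a reasonable determination of z and
# may be cross-checked against each other.
i1, i2, i3, i4 = 0.31, 0.28, 0.22, 0.19
q_near = near_field_quotient(i1, i2, i3, i4)
q_far = far_field_quotient(i1, i2, i3, i4)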
In the situation ofFIG.12, in case the spot size increases and the spot becomes more diffuse, i(D4) will increase more rapidly than i(D1), such that the quotient signal Q decreases. Contrarily, in the situation ofFIG.19A, both the size and the position of the light spot186are dependent on the z-coordinate. Thus, the tendency of the z-dependency of the combined sensor signal such as the quotient signal Q will be increased. In the situation ofFIG.12, depending on the z-coordinate, the sensor signal of at least one sensor will increase and simultaneously the sensor signal of at least one different sensor will decrease, resulting in the z-dependent quotient signal Q. In the situation ofFIG.19A, the position dependency of the light spot186can result in three different situations, depending on the relative position of light source, optical axis and sensor: Firstly, the position dependency of the light spot186may result in a further decrease of the at least one decreasing sensor signal depending on the z-coordinate, while, simultaneously, the position dependency of the light spot186may result in a further increase of the at least one increasing sensor signal depending on the z-coordinate, compared to the situation inFIG.12. Secondly, the position dependency of the light spot186may result in a reduced decrease or even an increase of the at least one decreasing sensor signal depending on the z-coordinate, while, simultaneously, the position dependency of the light spot186may result in a reduced increase or even a decrease of the at least one increasing sensor signal depending on the z-coordinate, compared to the situation inFIG.12. Thirdly, the position dependency of the light spot186may be such that the z-dependence of the sensor signals is largely unchanged compared to the situation inFIG.12. However, according to the present invention, the object distance is not determined from the position of the light spot186on a sensor, as done in triangulation methods. Instead, movement of the light spot186on the array174may be used to enhance the dynamics of the sensor signals and/or of the resulting quotient signal Q, which may result in an enhanced dynamics of the z-dependency. Furthermore, movement of the light spot186on the array174during measurement may be used to establish and/or to enhance object size independence for the whole measurement range by suitable relative positioning of the optical sensor176and the illumination source136. Thus, movement of the light spot186may be used not for the purpose of triangulation but for the purpose of object size independence. Additionally, as known from the prior art, the sensor signals i(D1), i(D2), i(D3), i(D4) may also be used for determining a transversal position x, y of the object112. Further, the sensor signals may also be used for verifying the z-coordinate determined by the present invention. FIG.19Cshows a comparison of two experimental setups using a detector setup according toFIG.19Awith a bi-cell as optical sensors176with two light-sensitive areas. In a first experimental setup, depending on the relative position of the illumination light source, the optical axis and the sensor, the light spot186may move in parallel to the linear boundary of the two optical sensors176of the bi-cell along a direction of movement210in dependence of the object distance. 
Since the direction of movement210of the light spot186is in parallel to the linear boundary of the two light-sensitive areas in dependence of the object distance, the resulting sensor signals are identical to a situation with no movement of the light spot186depending on the object distance, as shown inFIG.12. In a second experimental setup, depending on the relative position of the illumination light source, the optical axis and the sensor, the light spot186may move such that the distance of the center of the light spot186to the boundary of the two optical sensors176of the bi-cell changes in dependence of the object distance, such as a movement orthogonal to the boundary of the two optical sensors176, e.g. a movement along a direction of movement208in dependence of the object distance. The detector setup allowing movement of the light spot186may be a modification of the setup shown inFIG.19A. Thus, for most elements and optional details as well as further elements, reference may be made to the description ofFIG.19Aabove. InFIG.19C, the optical sensors176may be a bi-cell diode. FIG.19Dshows experimental results of the comparison of the two experimental setups using a detector setup according toFIG.19A, allowing movement of the light spot186according toFIG.19C, with movement of the light spot depending on the object distance along the directions of movement210and208. Curve212shows the dependency of the quotient Q on the longitudinal coordinate z for the detector setup allowing movement of the light spot186along the direction of movement210as shown inFIG.19C, which is in parallel to the boundary of the optical sensors of the bi-cell and which is a situation equivalent toFIG.12, without a movement of the light spot depending on the object distance. Curve214shows the dependency of the quotient Q on the longitudinal coordinate z for the detector setup according toFIG.19A, allowing movement of the light spot186according toFIG.19Calong the direction of movement208, depending on the object distance. The experimental setup was as follows: The optical sensors176may be a bi-cell diode, in particular a Si—Bi-Cell. The illumination source136may be a 950 nm laser with a spot size of 4 mm. The transfer device128may have a focal length of 20 mm, e.g. a lens available as Thorlabs Asphere, f=20 mm. The distance of the object112was varied from 0 to 3000 mm. Determination of the longitudinal coordinate z may be possible without allowing movement of the light spot186. In particular, according to the present invention, movement of the light spot may not be essential for determination of the longitudinal coordinate z. With the detector setup allowing movement of the light spot186along the direction210, or without any movement, determination of the object distance is possible at very small distances, whereas with movement along the direction208, determination of the object distance is possible for larger distances, such as distances greater than 500 mm. FIG.19Eshows the object size independence of the two experimental setups using a detector setup according toFIG.19A, allowing movement of the light spot186according toFIG.19C, with movement of the light spot depending on the object distance along the directions of movement208and210. In addition, for both experimental setups, the object size was varied among 1 mm (dashed line), 2 mm (dotted line), 6 mm (solid line) and 12 mm (loosely dotted line), by varying the aperture of the laser illumination source. 
The set of curves216shows the dependency of the quotient Q on the longitudinal coordinate z for the experimental setup allowing movement of the light spot186along the direction208. The set of curves218shows the dependency of the quotient Q on the longitudinal coordinate z for the experimental setup allowing movement of the light spot186along the direction210, or without any movement. The set of curves216shows only small deviations, in particular of less than 5%, whereas the set of curves218shows larger deviations, in particular with increasing distance z. Thus, movement of the light spot186on the array174during measurement may be used to establish and/or to enhance object size independence for the whole measurement range by suitable relative positioning of the optical sensor176and the illumination source136. InFIG.20, a schematic view of a further embodiment of a detector1110for determining a position of at least one object1112is depicted. In this case, the object1112may comprise a beacon device1114, from which a light beam1116propagates towards a first optical sensor1118and a second optical sensor1120. The first optical sensor1118comprises a first light-sensitive area1122, and the second optical sensor1120comprises a second light-sensitive area1124. Details of the second optical sensor1120and the second light-sensitive area1124will be explained in further detail below, with reference toFIGS.22A,22B and23. It shall be noted therein that, in the embodiment shown inFIG.20, the first optical sensor1118is positioned in front of the second optical sensor1120, such that the light beam1116reaches the first optical sensor1118before the second optical sensor1120. As discussed above, however, another order is feasible. Thus, as an example, the second optical sensor1120may be positioned in front of the first optical sensor1118. The latter option, which is not depicted herein, is specifically possible in case the second light-sensitive area1124is fully or partially transparent, such as by providing a transparent fluorescent waveguiding sheet1174, as will be outlined in further detail below. The light beam1116, as an example, may propagate along an optical axis1126of the detector1110. Other embodiments, however, are feasible. The detector1110, further, may comprise at least one transfer device1128, such as at least one lens or a lens system, specifically for beam shaping. Consequently, the light beam1116may be focused, such as in one or more focal points1130, and a beam width of the light beam1116may depend on a longitudinal coordinate z of the object1112, such as on a distance between the detector1110and the beacon device1114and/or the object1112. For details of this beam width dependency on the longitudinal coordinate, reference may be made to one or more of WO 2012/110924 A1 and/or WO 2014/097181 A1. As can be seen inFIG.20, the first optical sensor1118is a small optical sensor, whereas the second optical sensor1120is a large optical sensor. Thus, the width of the light beam1116fully may cover the first light-sensitive area1122, whereas, on the second light-sensitive area1124, a light spot is generated which is smaller than the light-sensitive area1124, such that the light spot is fully located within the second light-sensitive area1124. Possible embodiments will be explained below with reference toFIG.23. Thus, as an example, the first light-sensitive area1122may have a surface area of 10 mm2to 100 mm2, whereas the second light-sensitive area1124may have a surface area of more than 100 mm2, such as 200 mm2or more, e.g. 
200 to 600 mm2or 500 mm2or more. Other embodiments, however, are feasible. The first optical sensor1118, in response to the illumination by the light beam1116, may generate a first sensor signal s1, and the second optical sensor1120may generate at least one second sensor signal s2. As an example, the first optical sensor1118may be a linear optical sensor, i.e. the sensor signal s1is dependent on the total power of the light beam1116or on the portion of the light beam1116illuminating the first light-sensitive area1122, whereas the sensor signal s1is independent from the actual size of the light spot of illumination. In other words, the first optical sensor1118, preferably, does not exhibit the above-described FiP effect. The sensor signals s1and s2may be provided to an evaluation device1132of the detector1110. The evaluation device1132, as symbolically depicted inFIG.20, may specifically be embodied to derive a quotient signal Q, as explained above. The quotient signal Q, derived by dividing e.g. the sensor signals s1and s2or multiples or linear combinations thereof, may be used for deriving at least one item of information on a longitudinal coordinate z of the object1112and/or the beacon device1114, from which the light beam1116propagates towards the detector1110. Thus, as an example, a unique evaluation curve may exist, in which, for each quotient signal Q, a longitudinal coordinate z is assigned. The detector1110, in combination with the at least one beacon device1114, may be referred to as a detector system1134, as will be explained in further detail below, with reference toFIG.25. InFIG.21, a modification of the embodiment ofFIG.20is shown, which forms an alternative detector1110. The alternative embodiment of the detector1110widely corresponds to the embodiment shown inFIG.20. Instead of using an active light source, i.e. a beacon device1114with light-emitting properties for generating the light beam1116, however, the detector1110may comprise at least one illumination source1136. The illumination source1136, as an example, may comprise a laser, whereas, inFIG.20, as an example, the beacon device1114may comprise a light-emitting diode (LED). The illumination source1136may be configured for generating at least one illumination light beam1138for illuminating the object1112. The illumination light beam1138may fully or partially be reflected by the object1112and may travel back towards the detector1110, thereby forming the light beam1116. As shown inFIG.21, as an example, the illumination light beam1138may be parallel to the optical axis1126of the detector1110. Other embodiments, i.e. off-axis illumination and/or illumination at an angle, are feasible, too. In order to provide an on-axis illumination, as shown inFIG.21, as an example, one or more reflective elements1140may be used, such as one or more prisms and/or mirrors, such as dichroitic mirrors, such as movable mirrors or movable prisms. Apart from these modifications, the setup of the embodiment inFIG.21corresponds to the setup inFIG.20. Thus, again, an evaluation device1132may be used, having, e.g., at least one divider1142for forming the quotient signal Q, and, as an example, at least one position evaluation device1144, for deriving the at least one longitudinal coordinate z from the quotient signal Q. It shall be noted that the evaluation device1132may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components1142,1144may be embodied by appropriate software components. 
It shall be further noted that the embodiments shown inFIGS.20and21simply provide embodiments for determining the longitudinal coordinate of the object1112. As will be outlined in further detail below with reference toFIGS.22A and22Bas well as toFIG.23, the detector1110may also be used for providing additional information on at least one transversal coordinate of the object1112and/or of parts thereof. InFIGS.22A and22B, a top view (FIG.22A) and a cross-sectional view (FIG.22B) of the second optical sensor1120, which may be used in the setups e.g. ofFIGS.20and/or21, are shown. The second optical sensor1120may comprise a fluorescent waveguiding sheet1174which forms the second light-sensitive area1124facing towards the object1112. The fluorescent waveguiding sheet1174, in this exemplary embodiment, may be designed as a flat waveguiding sheet, in which, as symbolically depicted by the arrow1176inFIG.22B, waveguiding by internal reflection may take place, specifically by total internal reflection, specifically a waveguiding of fluorescence light generated within the fluorescent waveguiding sheet1174. The fluorescent waveguiding sheet1174, as an example, may have a lateral extension of at least 25 mm2, such as at least 100 mm2, more preferably of at least 400 mm2. As an example, a 10 mm×10 mm square sheet, a 20 mm×20 mm square sheet, a 50 mm×50 mm square sheet or another dimension may be used. It shall be noted, however, that non-square geometries or even non-rectangular geometries may be used, such as circular or oval geometries or polygonal geometries. The fluorescent waveguiding sheet1174, as an example, may comprise a matrix material1178and at least one fluorescent material1180disposed therein, such as at least one fluorophore, e.g. a fluorescent dye. For exemplary embodiments, reference may be made to the above-mentioned materials, such as one or more of the materials listed in WO 2012/168395 A1. As an example, a fluorescent material disclosed as substance 34.2 in WO 2012/168395 A1, including potential synthesis methods, may be used. The material may be immersed in polystyrene, such as at a concentration of 0.001-0.5 wt. %. The fluorescent material1180may be designed to generate fluorescence light in response to an illumination by the light beam1116. The fluorescent material1180and/or the concentration of the fluorescent material1180within the matrix material1178specifically may be chosen to show linear properties, at least within a range of measurement, i.e. within a range of intensities, such that the total power of the fluorescence light generated in response to an excitation is a linear function of the intensity of the illumination by the excitation light, i.e. by the light beam1116. As an example, the materials and/or intensities may be chosen such that saturation effects are avoided. The second optical sensor1120further, in this embodiment, may comprise a plurality of photosensitive elements1182,1184,1186,1188, inFIGS.22A and22Breferred to as PD1-PD4, located at respective edges1190,1192,1194,1196of the fluorescent waveguiding sheet1174, e.g. rim portions of the fluorescent waveguiding sheet1174. In this exemplary embodiment, the fluorescent waveguiding sheet1174may have a rectangular shape, such that pairs of edges are opposing each other, such as the pair of edges1190,1192and the pair of edges1194,1196. 
The sides of the rectangular shape of the fluorescent waveguiding sheet1174may define a Cartesian coordinate system, with an x-dimension defined by an interconnection between edges1190and1192, and a y-dimension defined by an interconnection between edges1196,1194, as indicated inFIG.22A. It shall be noted, however, that other coordinate systems are feasible. The photosensitive elements1182,1184,1186,1188, as an example, may comprise photodiodes. Specifically, these photosensitive elements1182,1184,1186,1188may each have an electrical capacitance comparable, preferably identical, to that of the first optical sensor1118. It shall be noted, however, that other embodiments are feasible. The photosensitive elements1182,1184,1186,1188, as an example, may be or may comprise strip-shaped photodiodes covering, preferably, the full length of the respective edges1190,1192,1194,1196, or, preferably, covering at least 50% or more preferably at least 70% of the length of these respective edges1190,1192,1194,1196. Other embodiments, however, are feasible, such as embodiments in which more than one photosensitive element is located at a respective edge. The photosensitive elements1182,1184,1186,1188each produce at least one sensor signal, in response to the light, specifically the fluorescence light, detected by these photosensitive elements1182,1184,1186,1188. All of these sensor signals are referred to as second sensor signals, wherein, in the following, PD1creates sensor signal s2,1, PD2creates sensor signal s2,2, PD3creates sensor signal s2,3, and PD4creates sensor signal s2,4, with the first index 2 denoting the fact that these sensor signals are second sensor signals, and with the second index, from 1 to 4, indicating the respective photosensitive element1182,1184,1186,1188from which the respective sensor signal originates. As outlined above inFIGS.20and21, the at least one first sensor signal s1and the second sensor signals s2,j(with j=1, . . . , 4) are provided to the evaluation device1132of the detector1110, the function of which will be explained in further detail below, specifically with reference toFIG.24. The evaluation device1132is configured to determine at least one longitudinal coordinate z of the object1112, which is not depicted in these figures, and from which the light beam1116propagates towards the detector1110, by evaluating the first and second sensor signals. Additionally, at least one transversal coordinate x and/or y may be determined, as will be outlined in further detail below, with reference toFIGS.23and24. The second optical sensor1120, as depicted inFIG.22B, may further optionally comprise at least one optical filter element1198. The optical filter element1198may be placed in front of an optional reference photosensitive element1200, which may further, with or without the optical filter element1198, be present in the detector1110. As an example, the reference photosensitive element1200may comprise a large area photodiode. Other setups, however, are feasible. Thus, it shall be noted that the reference photosensitive element1200may also be left out in this embodiment, since the first optical sensor1118may also take over the functionality of the reference photosensitive element1200. 
Specifically, in case a transparent fluorescent waveguiding sheet1174is used and in case the first optical sensor1118is placed behind the second optical sensor1120, the first optical sensor1118may also take over the functionality of the reference photosensitive element1200. It shall further be noted that one or both of the first optical sensor1118and the second optical sensor1120may be a uniform optical sensor, having a single light-sensitive area1122,1124, each, or that one or both of these optical sensors1118,1120may be pixelated. As an example, the at least one optical filter element1198may be designed to prevent fluorescence light from entering the reference photosensitive element1200or, at least, may attenuate fluorescence light by at least 70%, or, preferably, by at least 80%. InFIG.23, an illumination of the second light-sensitive area1124by the light beam1116is shown. Therein, two different situations are depicted, representing different distances between the object1112, from which the light beam1116propagates towards the detector1110, and the detector1110itself, resulting in two different spot sizes of light spots generated by the light beam in the fluorescent waveguiding sheet1174: firstly, a small light spot1202and, secondly, a large light spot1204. In both cases, the overall power of the light beam remains the same over the light spots1202,1204. Further, a shadow1206is depicted, which is generated by the first optical sensor1118being placed in front of the second optical sensor1120. In the following, it is assumed that the first optical sensor1118is still fully illuminated by the light beam1116. The illumination by the light beam1116induces fluorescence which, as depicted inFIG.22Babove, is fully or partially transported by waveguiding towards the photosensitive elements1182,1184,1186,1188. As indicated above, corresponding second sensor signals are generated by these photosensitive elements, and are provided to the evaluation device1132, in conjunction with the first sensor signal and, optionally, further in conjunction with at least one reference sensor signal generated by the at least one reference photosensitive element1200. The evaluation device1132, as symbolically depicted inFIG.24, is designed to evaluate the sensor signals which, therein, are represented as outlined above. The sensor signals may be evaluated by the evaluation device in various ways, in order to determine location information and/or geometrical information of the object1112, such as at least one longitudinal coordinate z of the object1112and, optionally, one or more transversal coordinates of the object1112. Firstly, the evaluation device1132may comprise at least one summing device1208configured to form a sum signal S of the sensor signals PD1to PD4, such as according to formula (1) above, for the second sensor signals s2,i, with i=1, . . . , 4 (the first index, for the sake of simplicity, is left out in the above-mentioned formula (1)). This sum signal S may replace the second sensor signal s2in general and/or, for a part of the further evaluation, may be used as "the" second sensor signal of the second optical sensor1120. This sum signal S may represent the total power of the fluorescence light generated by the light beam1116. Even so, some losses may occur, since, generally, not all of the fluorescence light will actually reach the photosensitive elements1182,1184,1186,1188. 
Thus, as an example, losses in waveguiding may occur, or some of the fluorescence light may actually be emitted from the edges1190,1192,1194,1196, in a direction which is not covered by the photosensitive elements1182,1184,1186,1188. Still, the sum signal S provides a fairly good measure for the total power of the fluorescence generated within the fluorescent waveguiding sheet1174. The evaluation device1132may comprise at least one divider1142which, as symbolically depicted inFIG.24, may be part of a position evaluation device1144and which may be configured for forming at least one quotient signal out of the first and second sensor signals s1, s2, with s2, as an example, being the sum signal S of the respective second sensor signals, as outlined above. Thus, as an example, the divider1142may be configured for one or more of dividing the first and second sensor signals, dividing multiples of the first and second sensor signals or dividing linear combinations of the first and second sensor signals. The position evaluation device1144further may be configured for determining the at least one longitudinal coordinate z by evaluating the quotient signal Q, such as by using at least one predetermined or determinable relationship between the quotient signal Q and the longitudinal coordinate. As an example, calibration curves may be used. The divider1142and/or the position evaluation device1144may, as an example, comprise at least one data processing device, such as at least one processor, at least one DSP, at least one FPGA and/or at least one ASIC. Further, for storing the at least one predetermined or determinable relationship between the longitudinal coordinate z and the quotient signal, at least one data storage device may be provided, such as for providing one or more look-up tables for storing the predetermined relationship. As outlined above, additional information may be derived from the second sensor signals s2,1, s2,2, s2,3and s2,4, besides the at least one longitudinal coordinate z of the object. Thus, additionally, at least one transversal coordinate x, y may be derived. This is mainly due to the fact that the distances between a center of the light spots1202,1204and the photosensitive elements1182,1184,1186,1188are non-equal. Thus, the center of the light spot1202,1204has a distance I1from the photosensitive element1182, a distance I2from the photosensitive element1184, a distance I3from the photosensitive element1186and a distance I4from the photosensitive element1188. Due to the differences in these distances between the location of the generation of the fluorescence light and the photosensitive elements detecting said fluorescence light, the sensor signals will differ. This is due to various effects. Firstly, again, internal losses will occur during waveguiding, since each total internal reflection implies a certain loss, such that the fluorescence light will be attenuated on its way, depending on the length of the path. The longer the distance of travel, the higher the attenuation and the higher the losses. Secondly, absorption effects will occur. Thirdly, a spreading of the light will have to be considered. The longer the distance between the light spot1202,1204and the respective photosensitive element1182,1184,1186,1188, the higher the probability that a photon will be directed into a direction other than the photosensitive element. 
Consequently, by comparing the sensor signals of the photosensitive elements1182,1184,1186,1188, at least one item of information on a transversal coordinate of the light spot1202,1204and, thus, of the object1112may be generated. The comparison of the sensor signals may take place in various ways. Thus, generally, the evaluation device1132may be designed to compare the sensor signals in order to derive the at least one transversal coordinate of the object1112and/or of the light spot1202,1204. As an example, the evaluation device1132may comprise at least one subtracting device1210and/or any other device which provides a function which is dependent on at least one transversal coordinate, such as on the coordinates x, y, of the object1112. For exemplary embodiments, the subtracting device1210may be designed to generate at least one difference signal, such as a signal according to formula (4) and/or (5) above, for one or each of the dimensions x, y inFIG.23. As an example, a simple difference between PD1and PD2, such as (PD1−PD2)/(PD1+PD2), may be used as a measure for the x-coordinate, and a difference between PD3and PD4, such as (PD3−PD4)/(PD3+PD4), may be used as a measure for the y-coordinate. A transformation of the transversal coordinates of the light spot1202,1204in the plane of the second light-sensitive area1124, as an example, into transversal coordinates of the object from which the light beam1116propagates to the detector1110, may simply be made by using the well-known lens equation. For further details, as an example, reference may be made to WO 2014/097181 A1. It shall be noted, however, that other transformations or other algorithms for processing the sensor signals by the evaluation device1132are feasible. Thus, besides subtractions or linear combinations with positive or negative coefficients, non-linear transformations are generally feasible. As an example, for transforming the sensor signals into z-coordinates and/or x, y-coordinates, one or more known or determinable relationships may be used, which, as an example, may be derived empirically, such as by calibrating experiments with the object placed at various distances from the detector1110and/or by calibrating experiments with the object placed at various transversal positions or three-dimensional positions, and by recording the respective sensor signals. FIG.25shows, in a highly schematic illustration, an exemplary embodiment of a detector1110, e.g. according to the embodiments shown inFIG.20or21. The detector1110specifically may be embodied as a camera1146and/or may be part of a camera1146. The camera1146may be made for imaging, specifically for 3D imaging, and may be made for acquiring still images and/or image sequences such as digital video clips. Other embodiments are feasible. FIG.25further shows an embodiment of a detector system1134, which, besides the at least one detector1110, comprises one or more beacon devices1114, which, in this example, may be attached and/or integrated into an object1112, the position of which shall be detected by using the detector1110.FIG.25further shows an exemplary embodiment of a human-machine interface1148, which comprises the at least one detector system1134and, further, an entertainment device1150, which comprises the human-machine interface1148. 
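Purely as a non-limiting illustration of the evaluation described above for the fluorescent waveguiding sheet1174, the following Python sketch combines the sum signal S, the quotient signal Q and the normalized difference signals into one evaluation step. The calibration table and all numeric values are hypothetical placeholders, and the formula numbers in the comments refer to the formulas cited above.

import numpy as np

# Minimal sketch, not part of the embodiments: evaluating the first sensor
# signal s1 and the four second sensor signals of PD1..PD4.
CAL_Z = np.linspace(100.0, 3000.0, 30)   # hypothetical distances z [mm]
CAL_Q = np.linspace(0.05, 0.85, 30)      # hypothetical monotonic Q(z)

def evaluate(s1: float, pd: tuple) -> tuple:
    """Return (z, x_norm, y_norm) from s1 and (PD1, PD2, PD3, PD4)."""
    pd1, pd2, pd3, pd4 = pd
    s = pd1 + pd2 + pd3 + pd4              # sum signal S, cf. formula (1)
    q = s1 / s                             # quotient of first and second signal
    z = float(np.interp(q, CAL_Q, CAL_Z))  # invert the calibration curve
    x_norm = (pd1 - pd2) / (pd1 + pd2)     # normalized x difference signal
    y_norm = (pd3 - pd4) / (pd3 + pd4)     # normalized y difference signal
    return z, x_norm, y_norm

# Example usage with hypothetical photocurrents (arbitrary units):
z, x_norm, y_norm = evaluate(s1=0.12, pd=(0.30, 0.26, 0.21, 0.25))

The normalized differences locate the light spot in the plane of the sheet; a transformation into transversal coordinates of the object1112may then be made, e.g., by the lens equation, as noted above.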
The figure further shows an embodiment of a tracking system1152for tracking a position of the object1112, which comprises the detector system1134. The components of the devices and systems shall be explained in further detail below. FIG.25further shows an exemplary embodiment of a scanning system1154for scanning a scenery comprising the object1112, such as for scanning the object1112and/or for determining at least one position of the at least one object1112. The scanning system1154comprises the at least one detector1110, and, further, optionally, the at least one illumination source1136as well as, optionally, at least one further illumination source1136. The illumination source1136, generally, is configured to emit at least one illumination light beam1138, such as for illumination of at least one dot, e.g. a dot located on one or more of the positions of the beacon devices1114and/or on a surface of the object1112. The scanning system1154may be designed to generate a profile of the scenery including the object1112and/or a profile of the object1112, and/or may be designed to generate at least one item of information about the distance between the at least one dot and the scanning system1154, specifically the detector1110, by using the at least one detector1110. InFIG.25, as an example, one or more reflective elements1140may be used, which, as an example, may be partially transparent, such as one or more prisms and/or mirrors, such as dichroitic mirrors, such as movable mirrors or movable prisms. As outlined above, an exemplary embodiment of the detector1110which may be used in the setup ofFIG.25is shown inFIGS.20and21. Thus, the detector1110, besides the optical sensors1118,1120, comprises at least one evaluation device1132, having e.g. the at least one divider1142and/or the at least one position evaluation device1144, as symbolically depicted inFIG.25. The components of the evaluation device1132may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector1110. Besides the possibility of fully or partially combining two or more components, one or more of the optical sensors1118,1120and one or more of the components of the evaluation device1132may be interconnected by one or more connectors1156and/or by one or more interfaces, as symbolically depicted inFIG.25. Further, the one or more connectors1156may comprise one or more drivers and/or one or more devices for modifying or preprocessing sensor signals. Further, instead of using the at least one optional connector1156, the evaluation device1132may fully or partially be integrated into one or both of the optical sensors1118,1120and/or into a housing1158of the detector1110. Additionally or alternatively, the evaluation device1132may fully or partially be designed as a separate device. In this exemplary embodiment, the object1112, the position of which may be detected, may be designed as an article of sports equipment and/or may form a control element or a control device1160, the position of which may be manipulated by a user1162. As an example, the object1112may be or may comprise a bat, a racket, a club or any other article of sports equipment and/or fake sports equipment. Other types of objects1112are possible. Further, the user1162himself or herself may be considered as the object1112, the position of which shall be detected. As outlined above, the detector1110comprises at least the optical sensors1118,1120. The optical sensors1118,1120may be located inside the housing1158of the detector1110. 
Further, the at least one transfer device1128may be comprised, such as one or more optical systems, preferably comprising one or more lenses. An opening1164inside the housing1158, which, preferably, is located concentrically with regard to the optical axis1126of the detector1110, preferably defines a direction of view1166of the detector1110. A coordinate system1168may be defined, in which a direction parallel or anti-parallel to the optical axis1126may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis1126may be defined as transversal directions. In the coordinate system1168, symbolically depicted inFIG.25, a longitudinal direction is denoted by z, and transversal directions are denoted by x and y, respectively. Other types of coordinate systems1168are feasible, such as non-Cartesian coordinate systems. The detector1110may comprise the optical sensors1118,1120as well as, optionally, further optical sensors. The optical sensors1118,1120preferably are located in one and the same beam path, one behind the other, such that the first optical sensor1118covers a portion of the second optical sensor1120. Alternatively, however, a branched beam path may be possible, with additional optical sensors in one or more additional beam paths, such as by branching off a beam path for at least one transversal detector or transversal sensor for determining transversal coordinates of the object1112and/or of parts thereof. One or more light beams1116are propagating from the object1112and/or from one or more of the beacon devices1114, towards the detector1110. The detector1110is configured for determining a position of the at least one object1112. For this purpose, as explained above in the context ofFIGS.20to23, the evaluation device1132is configured to evaluate sensor signals provided by the optical sensors1118,1120. The detector1110is adapted to determine a position of the object1112, and the optical sensors1118,1120are adapted to detect the light beam1116propagating from the object1112towards the detector1110, specifically from one or more of the beacon devices1114. In case no illumination source1136is used, the beacon devices1114and/or at least one of these beacon devices1114may be or may comprise active beacon devices with an integrated illumination source such as a light-emitting diode. In case the illumination source1136is used, the beacon devices1114do not necessarily have to be active beacon devices. Contrarily, a reflective surface of the object1112may be used, such as integrated reflective beacon devices1114having at least one reflective surface. The light beam1116, directly and/or after being modified by the transfer device1128, such as being focused by one or more lenses, illuminates the light-sensitive areas1122,1124of the optical sensors1118,1120. For details of the evaluation, reference may be made toFIGS.20to23above. As outlined above, the determination of the position of the object1112and/or a part thereof by using the detector1110may be used for providing a human-machine interface1148, in order to provide at least one item of information to a machine1170. In the embodiments schematically depicted inFIG.25, the machine1170may be a computer and/or may comprise a computer. Other embodiments are feasible. The evaluation device1132may even be fully or partially integrated into the machine1170, such as into the computer. 
As outlined above,FIG.25also depicts an example of a tracking system1152, configured for tracking the position of the at least one object1112and/or of parts thereof. The tracking system1152comprises the detector1110and at least one track controller1172. The track controller1172may be adapted to track a series of positions of the object1112at specific points in time. The track controller1172may be an independent device and/or may be fully or partially integrated into the machine1170, specifically the computer, as indicated inFIG.25and/or into the evaluation device1132. Similarly, as outlined above, the human-machine interface1148may form part of an entertainment device1150. The machine1170, specifically the computer, may also form part of the entertainment device1150. Thus, by means of the user1162functioning as the object1112and/or by means of the user1162handling a control device1160functioning as the object1112, the user1162may input at least one item of information, such as at least one control command, into the computer, thereby varying the entertainment functions, such as controlling the course of a computer game. InFIGS.26A and26B, an alternative embodiment of the second optical sensor1120is shown, in a top view (FIG.26A) and in a cross-sectional view (FIG.26B). For most of the details of the second optical sensor1120, reference may be made toFIGS.22A and22Babove. The embodiment, however, shows various variations from the embodiment ofFIGS.22A and22B, which may be realized in an isolated fashion or in combination. Thus, firstly, the embodiment shows variations of the placement of the photosensitive elements. Besides the photosensitive elements1182,1184,1186,1188located at opposing edges1190,1192,1194,1196, which, in this embodiment, are straight edges, additional photosensitive elements1212are located at corners1214of the fluorescent waveguiding sheet1174. The edges1190,1192,1194,1196in combination may form a rim of the fluorescent waveguiding sheet1174, such as a rectangular rim. The rim itself may be roughened or even blackened in order to avoid back reflections from the rim. The corners1214also are part of the edges of the fluorescent waveguiding sheet1174. The photosensitive elements1212located at the corners1214may provide additional second sensor signals which may be evaluated in a similar fashion as shown e.g. inFIG.24. They may provide an increased accuracy of the determination of the z-coordinate and/or of the x, y-coordinate. Thus, as an example, these additional sensor signals may be included in the sum signal, such as formed by using formula (1) above. Additionally or alternatively, these additional sensor signals may be implemented into the formation of difference signals, such as according to formulae (2) and/or (3) above. As an example, difference signals between two photosensitive elements1212located at opposing corners1214may be formed and/or difference signals between one photosensitive element1212located at a corner1214and one photosensitive element located at a straight edge, e.g. a straight rim portion, may be formed. The difference signal D, in each case, may denote a location of the light spot on an axis interconnecting the two photosensitive elements. Further, the embodiment ofFIGS.26A and26Bshows a variation of the placement of the photosensitive elements1182,1184,1186,1188,1212with respect to the fluorescent waveguiding sheet1174. 
Thus, in the embodiment of FIGS. 22A and 22B, the photosensitive elements 1182, 1184, 1186, 1188 may be located within the plane of the fluorescent waveguiding sheet 1174. Additionally or alternatively, as shown in the embodiment of FIGS. 26A and 26B, some or even all of the photosensitive elements 1182, 1184, 1186, 1188, 1212 may be located outside the plane of the fluorescent waveguiding sheet 1174. Specifically, as shown in the cross-sectional view of FIG. 26B, as an example, the photosensitive elements 1182, 1184, 1186, 1188, 1212 may be optically coupled to the fluorescent waveguiding sheet 1174 by optical coupling elements 1216. As an example, the photosensitive elements 1182, 1184, 1186, 1188, 1212 simply may be glued to the fluorescent waveguiding sheet 1174 by using one or more transparent adhesives, such as an epoxy adhesive. Further, the embodiment of FIGS. 26A and 26B shows a variation of the size and shape of the photosensitive elements 1182, 1184, 1186, 1188, 1212. Thus, the photosensitive elements 1182, 1184, 1186, 1188, 1212 do not necessarily have to be strip-shaped photosensitive elements. As an example, very small photodiodes may be used, such as rectangular photodiodes or even point-like or spot-like photodiodes. As outlined above, a small size of the photodiodes generally may lead to a lower electrical capacitance and, thus, may lead to a faster response of the second optical sensor 1120. Further, the embodiment of FIGS. 26A and 26B shows that no reference photosensitive element 1200 is necessary. Thus, as discussed above, the sum signal itself may replace the function of the reference photosensitive element 1200. Thus, the second optical sensor 1120 as shown in the embodiment of FIGS. 26A and 26B provides a fully functional and, optionally, transparent PSD. No further PSDs are required. FIGS. 27A and 27B show a schematic view of a further exemplary embodiment of a detector 110 according to the present invention. In FIG. 27A, the detector 110 comprises at least two optical sensors 113, for example a first optical sensor 118 and a second optical sensor 120, each having at least one light-sensitive area 121. The optical detector 110, further, comprises at least one transfer device 128, such as at least one lens or a lens system, specifically for beam shaping. The transfer device 128 has an optical axis 129, wherein the transfer device 128 and the optical detector preferably may have a common optical axis. The detector 110 may comprise at least one illumination source 136. The illumination source 136, as an example, may comprise a laser source. The illumination source 136 may be arranged such that the illumination light beam 138 is one or both of non-parallel to the optical axis 126 or parallel to but shifted from the optical axis 126, i.e. off-axis. The illumination source 136 may be configured for generating at least one illumination light beam 138 for illuminating the object 112. The illumination light beam 138 is fully or partially reflected by the object 112 and travels back towards the detector 110, thereby forming the light beam 116. The light beam 116 propagates from the object 112 towards the first optical sensor 118 and the second optical sensor 120. The first optical sensor 118 may comprise a first light-sensitive area 122, and the second optical sensor 120 may comprise a second light-sensitive area 124. In this embodiment, the optical sensors 118, 120 may be arranged such that the light-sensitive areas 122, 124 have identical surface areas. For example, the optical sensors 118, 120 may be identical.
The detector 110 may further comprise the reflective element 140, such as at least one beam splitter, which is adapted to lead the light beam 116 from the transfer device 128 to both of the optical sensors 118, 120. The first optical sensor 118 may have a distance db1 from the beam splitter and the second optical sensor 120 may have a distance db2 from the beam splitter, wherein db1 ≠ db2. Again, an evaluation device 132 may be used, having, e.g., at least one divider 142 for forming the quotient signal Q, and, as an example, at least one position evaluation device 144, for deriving the at least one longitudinal coordinate z from the quotient signal Q. It shall be noted that the evaluation device 132 may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 142, 144 may be embodied by appropriate software components. In FIG. 27B, the detector 110 comprises at least two optical sensors 113, for example a first optical sensor 118 and a second optical sensor 120, each having at least one light-sensitive area 121. The optical detector 110, further, may comprise at least one transfer device 128, such as at least one lens or a lens system. The transfer device 128 has an optical axis 129, wherein the transfer device 128 and the optical detector preferably may have a common optical axis. The detector 110 may comprise at least one illumination source 136. The illumination source 136, as an example, may comprise a laser source, for example a 1550 nm laser source. The illumination source 136 may be arranged such that the illumination light beam 138 is one or both of non-parallel to the optical axis 126 or parallel to but shifted from the optical axis 126, i.e. off-axis. The illumination source 136 may be configured for generating at least one illumination light beam 138 for illuminating the object 112. The illumination light beam 138 is fully or partially reflected by the object 112 and travels back towards the detector 110, thereby forming the light beam 116. The light beam 116 propagates from the object 112 towards the first optical sensor 118 and the second optical sensor 120. The first optical sensor 118 may comprise the first light-sensitive area 122, and the second optical sensor 120 may comprise the second light-sensitive area 124. As can be seen in FIG. 27B, the first optical sensor 118 is a small optical sensor, whereas the second optical sensor 120 is a large optical sensor. The optical sensors 118, 120 may be Ge sensors. The first optical sensor 118 may have a first distance from the transfer device 128 and the second optical sensor 120 may have a second distance from the transfer device 128. In FIG. 27B, the first optical sensor 118 may be close to the transfer device 128, whereas the second optical sensor 120 may be arranged further away, in the direction of the focus. The first optical sensor 118 may be arranged such that, independent from a distance from the object, a sensor signal of the first optical sensor 118 may be proportional to the total power of the light beam passing the transfer device 128. Again, an evaluation device 132 may be used, having, e.g., at least one divider 142 for forming the quotient signal Q, and, as an example, at least one position evaluation device 144, for deriving the at least one longitudinal coordinate z from the quotient signal Q. It shall be noted that the evaluation device 132 may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 142, 144 may be embodied by appropriate software components.
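Where the evaluation device 132 is embodied in software, the divider 142 and the position evaluation device 144 reduce to a few lines of code. The following Python sketch is illustrative only: the function names are hypothetical and the calibration values are placeholders, standing in for a measured, monotonic calibration curve Q(z) of the actual optics.

```python
import numpy as np

def quotient_signal(s1, s2):
    """Divider: form the quotient signal Q from the two sensor signals."""
    return s1 / s2

# Placeholder calibration curve Q(z), assumed monotonically decreasing;
# in practice this would be recorded for the specific detector optics.
z_cal = np.array([0.1, 0.5, 1.0, 2.0, 3.0])   # longitudinal coordinate z in m
q_cal = np.array([5.2, 2.9, 1.8, 1.1, 0.8])   # quotient Q measured at z_cal

def longitudinal_coordinate(q):
    """Position evaluation: invert the calibration curve by interpolation.
    np.interp expects increasing x values, hence the reversed arrays."""
    return float(np.interp(q, q_cal[::-1], z_cal[::-1]))

# Example: sensor signals s1 = 2.4 and s2 = 1.2 yield Q = 2.0
print(longitudinal_coordinate(quotient_signal(2.4, 1.2)))
```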
In FIG. 28, experimental results of a distance determination with the detector 110 are shown. In this experimental setup, the transfer device 128 was a plano-convex lens having a focal length of 150 mm and a diameter of 75 mm, coated with an anti-reflective coating for a range of 1050-1700 nm, available as Thorlabs LA1002-C. The object 112, in this case a piece of carpet, was illuminated by a laser diode with 30 mW CW power output at a wavelength of 1550 nm, available as Schäfter+Kirchhoff 55 cm-1550-30-Q04-T12-C-6. The illumination source 136 was placed laterally next to the transfer device and was operated at 367 Hz with a 50:50 rectangular modulation. A second optical sensor 120, in this experimental setup a Ge photodiode with dimensions of 10 mm×10 mm, available as Thorlabs FDG1010, was arranged directly on the transfer device, and a first optical sensor 118 having a diameter of 5 mm, available as Thorlabs FDG05, was placed at a distance of 0.85 m from the transfer device 128. FIG. 28 shows the determined quotient signal Q as a function of the distance d in m, corresponding to the longitudinal coordinate z of the object. In FIG. 29, a further exemplary embodiment of the detector 110 is depicted. For details of the optical sensor 113, reference is made to FIG. 6 above. As in FIGS. 27A and B, the illumination source 136 may be positioned off-axis. The illumination source 136 may be adapted to generate and/or to project a cloud of points; for example, the illumination source 136 may comprise at least one optical element 194, in particular one or more optical elements selected from the group consisting of: at least one digital light processing (DLP) projector; at least one LCoS projector; at least one spatial light modulator; at least one diffractive optical element; at least one array of light emitting diodes; at least one array of laser light sources. The sensor element 115 may comprise a matrix 117 of optical sensors 113, each optical sensor 113 having at least one light-sensitive area 121 facing the object 112. The sensor element 115 may comprise at least one CMOS sensor. In FIG. 30, the cloud of points impinging on the sensor element 115 is schematically depicted. Additionally, disturbances may be present on the matrix 117, such as disturbances due to speckles and/or extraneous light and/or multiple reflections. The evaluation device 132 may be adapted to determine at least one region of interest 196, for example one or more pixels illuminated by the light beam 116 which are used for determination of the longitudinal coordinate of the object 112. In FIG. 30, regions of interest 196 are shown by way of example as circular areas with dashed lines. For example, the evaluation device 132 may be adapted to perform a filtering method, for example a blob analysis and/or object recognition method. FIGS. 31A to O show further exemplary configurations of optical sensors according to the present invention, in particular in a top view in the direction of propagation of the light beam 116. In FIG. 31A, a top view of two rectangular optical sensors 113 is shown, wherein the first optical sensor 118 is a small optical sensor in front of a larger second optical sensor 120. The first optical sensor 118 and the second optical sensor 120 may be arranged with a different offset, in particular in a transversal direction y, from the optical axis 126. In FIGS. 31B and 31C, a top view of a large rectangular optical sensor 120 is shown, wherein the first optical sensor 118 is a small optical sensor in front of the larger second optical sensor 120, having a triangle-shaped (FIG. 31B) or star-shaped (FIG. 31C) light-sensitive area 121.
In FIGS. 31M to O, a top view of two rectangular optical sensors 113 is shown, wherein the first optical sensor 118 and the second optical sensor 120 are rectangular sensors of the same size. In FIGS. 31M to O, a mask 119 is arranged in front of the first and second optical sensors 118, 120. The mask 119 may be arranged with a different offset from the optical axis 126. The mask 119 may have an arbitrary size and shape; for example, the mask may be rectangle-shaped (FIG. 31M), triangle-shaped (FIG. 31N) or star-shaped (FIG. 31O). However, other sizes and shapes are feasible. The mask 119 may be adapted to prevent light from impinging on the light-sensitive areas of the first and second optical sensors 118, 120. If used in a situation comparable to the situation illustrated in FIG. 19A, the mask may result in a further z-dependent decrease of a decreasing sensor signal, resulting in an increased z-dependency of the resulting quotient signal Q. The first optical sensor 118 and the second optical sensor 120 may be arranged with a different offset from the optical axis 126. FIG. 31K shows two circular optical sensors 113, wherein the first optical sensor 118 is a small optical sensor in front of the larger second optical sensor 120. In FIG. 31D, the light-sensitive area of the first optical sensor 118 is square-shaped, and the light-sensitive area of the second optical sensor 120 is rectangular, such that the surface areas in x and y differ. In addition, a center of the first optical sensor 118 and a center of the second optical sensor 120 may have different x coordinates, such that the optical sensors 118, 120 may have different spatial offsets in one or more of the x and y directions from the optical axis. In FIG. 31H, both the first optical sensor 118 and the second optical sensor 120 may be rectangular. The first optical sensor 118 and the second optical sensor 120 may be arranged such that the center of the first optical sensor 118 and the center of the second optical sensor 120 may have different x coordinates and that the surface areas in x and y differ. The first optical sensor 118 and the second optical sensor 120 may be arranged with a different offset from the optical axis 126. In FIG. 31L, the first optical sensor 118 may have a shape deviating from the shape of the second optical sensor 120, such as a circular or semicircular shape. FIGS. 31E, F, G, I, J show a sensor element 115 having the matrix of pixels 117. In FIGS. 31E, F, G, the sensor element 115 has a rectangular shape, whereas in FIGS. 31I and J the sensor element 115 has a circular shape. Rows and columns may be arranged equidistantly or non-equidistantly. In case of equidistant rows and/or columns, the sensor element 115 may be arranged with a spatial offset to the optical axis 126. FIG. 32 shows experimental results of a determination of a longitudinal coordinate z for different object sizes. The experimental setup was comparable to the setup shown in FIG. 19A. In the measurement setup, the object 112, a paper target, was illuminated by a laser 136 with a wavelength of 905 nm and a power of 1.6 mW, modulated at 23 Hz. Light reflected from the object 112 was led to a quadrant diode 178, available as OSI Optoelectronics OSI Spot-4D. Between the object 112 and the quadrant diode 178, an aspheric lens 128 having an effective focal length of 20.0 mm and a diameter of 25.0 mm was placed, available as Thorlabs AL2520M-B. The distance from the quadrant diode 178 to the lens 128 was 19.7 mm, and the quadrant diode 178 had an offset from the optical axis of y=0.5 mm.
Further, different from the situation in FIG. 19A and not shown in FIG. 19A, in the situation of FIG. 32, an iris diaphragm or a further lens was placed in front of the laser 136, between the laser 136 and the object 112, to modify the illumination light beam 138. The iris diaphragm was used to modify the width of the illumination light beam 138. The further lens was used to obtain a diverging illumination light beam 138 with a beam width increasing with the distance from the laser 136. FIG. 32 shows the quotient Q of two adjacent quadrant currents as a function of the distance z in mm, i.e. the longitudinal coordinate of the object 112. In a first experiment, the diameter of the illumination light beam 138 was varied by an iris diaphragm from 1 mm, solid line, to 3.5 mm, loosely dashed line, and to 5 mm, dash-dot line. In a second experiment, the diameter of the illumination light beam 138 was varied by the further lens such that the beam width of the illumination light beam 138 diverges with increasing distance from the further lens. To characterize the diverging illumination light beam 138, the beam width at 1 m, 2 m, and 3 m from the lens 128 is given. The dashed line shows the quotient Q, wherein the beam width was 10 mm at 1 m distance, 16 mm at 2 m distance and 22 mm at 3 m distance from the lens 128. The dotted line shows the quotient Q, wherein the beam width was 15 mm at 1 m distance, 32 mm at 2 m distance and 49 mm at 3 m distance from the lens 128. Below z=2300 mm, all curves show the same dependency of Q on z, with deviations below ±5%, and thus independence from the beam width. In the situation of FIG. 32, the beam width at the object 112 corresponds to the object size that is measured. The independence of the quotient Q from the beam width, and thus from the object size, clearly demonstrates the property of object size independence. In an application, an effect like that of the further lens, leading to a diverging illumination light beam, may be caused by a liquid drop, rain, dirt or the like, such as on the laser module. Thus, object size independence is an important property for robust measurements. FIGS. 33A and B show an exemplary beam profile and the determination of a first area 198 and a second area 200 of the beam profile. In FIG. 33A, the normalized intensity Inorm is depicted as a function of the transversal coordinate x in mm. The object size was 20 mm and the distance from object to sensor was 1200 mm. The first area 198 of the beam profile may comprise essentially edge information of the beam profile, and the second area 200 of the beam profile may comprise essentially center information of the beam profile. The beam profile may have a center, a maximum value of the beam profile and/or a center point of a plateau of the beam profile. In FIG. 33A, the center of the plateau may be at 500 mm. The beam profile may further comprise falling edges extending from the plateau. The second area 200 may comprise inner regions of the cross section and the first area 198 may comprise outer regions of the cross section. At least one area of the beam profile may be determined and/or selected as the first area 198 of the beam profile if it comprises at least parts of the falling edges of the cross section. In FIG. 33A, the first area 198 on both sides of the center is depicted in dark grey. At least one area of the beam profile may be determined and/or selected as the second area 200 of the beam profile if it is close to or around the center and comprises essentially center information.
In FIG. 33A, the second area 200 is depicted in light grey. FIG. 33B shows the corresponding light spot of the intensity distribution as shown in FIG. 33A and the corresponding first area 198 and second area 200. FIG. 34 shows a further exemplary embodiment of the detector 110. The optical sensors 113 may comprise the first optical sensor 118 having the first light-sensitive area 122 and the second optical sensor 120 having the second light-sensitive area 124. The first light-sensitive area 122 and the second light-sensitive area 124 are arranged such that a condition ac≠bd is satisfied. Here, "a" is a ratio of photons hitting both an inner region 202 of a plane 204, the plane 204 being perpendicular to the optical axis 126 and intersecting the optical axis 126 at a distance equal to half of a focal length f of the transfer device 128, and the first light-sensitive area 122. "b" is a ratio of photons hitting both the inner region 202 of the plane 204 and the second light-sensitive area 124. "c" is a ratio of photons hitting both an outer region 206 of the plane 204 and the first light-sensitive area 122. "d" is a ratio of the photons hitting both the outer region 206 of the plane 204 and the second light-sensitive area 124. The inner region 202 may have an area with a geometrical center point on the optical axis 126 and an extension such that half of the photons hit the plane 204 within the inner region 202 and the other half hit the plane outside the inner region 202. The inner region 202 may be designed as a circle with a center point on the optical axis 126 and a radius r which is chosen such that half of the photons hit the plane 204 within the circle and the other half hit the plane outside the circle. In FIG. 35, a schematic view of an exemplary embodiment of a detector 2110 for determining a position of at least one object 2112 is depicted. In FIG. 35, the object 2112 is depicted for two different object distances. The detector 2110 comprises at least two optical sensors 2113, for example a first optical sensor 2118 and a second optical sensor 2120, each having at least one light-sensitive area 2121. The object 2112 may comprise at least one beacon device 2114, from which a light beam 2116, also denoted as incident light beam, propagates towards the detector 2110. Additionally or alternatively, the detector may comprise at least one illumination source 2115 for illuminating the object 2112. As an example, the illumination source 2115 may be configured for generating an illuminating light beam for illuminating the object 2112. Specifically, the illumination source 2115 may comprise at least one laser and/or laser source. Various types of lasers may be employed, such as semiconductor lasers. Additionally or alternatively, non-laser light sources may be used, such as LEDs and/or light bulbs. The illumination source 2115 may comprise an artificial illumination source, in particular at least one laser source and/or at least one incandescent lamp and/or at least one semiconductor light source, for example at least one light-emitting diode, in particular an organic and/or inorganic light-emitting diode. As an example, the light emitted by the illumination source 2115 may have a wavelength of 300-500 nm. Additionally or alternatively, light in the infrared spectral range may be used, such as in the range of 780 nm to 3.0 μm. Specifically, light in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1000 nm, may be used. Further, the illumination source 2115 may be configured for emitting modulated or non-modulated light.
In case a plurality of illumination sources 2115 is used, the different illumination sources may have different modulation frequencies which, as outlined in further detail below, later on may be used for distinguishing the light beams. The first optical sensor 2118 may comprise a first light-sensitive area 2122, and the second optical sensor 2120 may comprise a second light-sensitive area 2124. The light beam 2116, as an example, may propagate along an optical axis 2126 of the detector 2110. Other embodiments, however, are feasible. The first light-sensitive area 2122 and the second light-sensitive area 2124 may be oriented towards the object 2112. The optical detector 2110, further, may comprise at least one transfer device 2128, such as at least one lens or a lens system, specifically for beam shaping. The transfer device 2128 may have at least one focal length in response to the incident light beam 2116 propagating from the object 2112 to the detector 2110. The transfer device 2128 may have an optical axis 2129, wherein the transfer device 2128 and the optical detector preferably may have a common optical axis. The transfer device 2128 may constitute a coordinate system. A direction parallel or anti-parallel to the optical axis 2126, 2129 may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis 2126, 2129 may be defined as transversal directions, wherein a longitudinal coordinate l is a coordinate along the optical axis 2126, 2129 and wherein d is a spatial offset from the optical axis 2126, 2129. Consequently, the light beam 2116 is focused, such as in one or more focal points, and a beam width of the light beam 2116 may depend on a longitudinal coordinate z of the object 2112, such as on a distance between the detector 2110 and the beacon device 2114 and/or the object 2112. The optical sensors 2118, 2120 may be positioned off focus. For details of this beam width dependency on the longitudinal coordinate, reference may be made to one or more of WO 2012/110924 A1 and/or WO 2014/097181 A1. The detector comprises at least one angle dependent optical element 2130 adapted to generate at least one light beam 2131 having at least one beam profile depending on an angle of incidence of an incident light beam propagating from the object 2112 towards the detector 2110 and illuminating the angle dependent optical element 2130. The angle dependent optical element 2130 may have angle dependent transmission properties such that an electromagnetic wave impinging on a first side 2132, for example a surface and/or an entrance, of the angle dependent optical element 2130 may be partly, depending on the properties of the angle dependent optical element, absorbed and/or reflected and/or transmitted. A degree of transmission may be defined as the quotient of the transmitted power of the electromagnetic wave, i.e. the power behind the angle dependent optical element 2130, and the incident power of the electromagnetic wave, i.e. the power before impinging on the angle dependent optical element 2130. The angle dependent optical element 2130 may be designed such that the degree of transmission depends on the angle of incidence at which the incident light beam 2116, propagating from the object towards the detector, impinges on the angle dependent optical element 2130. The angle of incidence may be measured with respect to an optical axis of the angle dependent optical element 2130. The angle dependent optical element 2130 may be arranged in the direction of propagation behind the transfer device 2128.
The transfer device may, for example, comprise at least one collimating lens. The angle dependent optical element 2130 may be designed to weaken rays impinging with larger angles compared to rays impinging with a smaller angle. For example, the degree of transmission may be highest for light rays parallel to the optical axis, i.e. at 0°, and may decrease for higher angles. In particular, at at least one cut-off angle, the degree of transmission may steeply fall to zero. Thus, light rays having a large angle of incidence may be cut off. The angle dependent optical element 2130 may comprise at least one optical element selected from the group consisting of: at least one optical fiber, in particular at least one multifurcated optical fiber, in particular at least one bifurcated optical fiber; at least one diffractive optical element; at least one angle dependent reflective element; at least one diffractive grating element, in particular a blazed grating element; at least one aperture stop; at least one prism; at least one lens; at least one lens array, in particular at least one microlens array; at least one optical filter; at least one polarization filter; at least one bandpass filter; at least one liquid crystal filter, in particular a liquid crystal tunable filter; at least one short-pass filter; at least one long-pass filter; at least one notch filter; at least one interference filter; at least one transmission grating; at least one nonlinear optical element, in particular one birefringent optical element. The first optical sensor 2118, in response to the illumination by the light beam 2131, may generate a first sensor signal s1, whereas the second optical sensor 2120 may generate a second sensor signal s2. Preferably, the optical sensors 2118, 2120 are linear optical sensors, i.e. the sensor signals s1 and s2 each are solely dependent on the total power of the light beam 2131 or of the portion of the light beam 2131 illuminating their respective light-sensitive areas 2122, 2124, whereas these sensor signals s1 and s2 are independent of the actual size of the light spot of illumination. The sensor signals s1 and s2 are provided to an evaluation device 2133 of the detector 2110. The evaluation device 2133 is embodied to derive a quotient signal Q, as explained above. The quotient signal Q, derived by dividing the sensor signals s1 and s2 or multiples or linear combinations thereof, may be used for deriving at least one item of information on a longitudinal coordinate z of the object 2112 and/or the beacon device 2114, from which the light beam 2116 propagates towards the detector 2110. The evaluation device 2133 may have at least one divider 2134 for forming the combined signal Q, and, as an example, at least one position evaluation device 2136, for deriving the at least one longitudinal coordinate z from the combined signal Q. It shall be noted that the evaluation device 2133 may fully or partially be embodied in hardware and/or software. Thus, as an example, one or more of components 2134, 2136 may be embodied by appropriate software components. In FIG. 36, a modification of the embodiment of FIG. 35 is shown, which forms an alternative detector 2110. The alternative embodiment of the detector 2110 widely corresponds to the embodiment shown in FIG. 35. In FIG. 36, the angle dependent optical element 2130 may comprise at least one optical fiber 2138. The optical fiber 2138 may be adapted to transmit at least parts of the incident light beam 2116 which are not absorbed and/or reflected, between two ends of the optical fiber.
The optical fiber 2138 may have a length and may be adapted to permit transmission over a distance. The optical fiber 2138 may comprise at least one fiber core which is surrounded by at least one fiber cladding having a lower index of refraction than the fiber core. Below the angle of acceptance, the optical fiber 2138 may be adapted to guide the incoming light beam by total internal reflection. The optical fiber 2138 may be designed such that the degree of transmission may be highest for incoming light rays parallel to the optical fiber, i.e. at an angle of 0°, neglecting reflection effects. The optical fiber 2138 may be designed such that, for higher angles, for example angles from 1° to 10°, the degree of transmission may decrease smoothly to around 80% of the degree of transmission for parallel light rays and may remain at this level constantly up to an acceptance angle of the optical fiber 2138. The optical fiber 2138 may be designed such that, above the acceptance angle, total reflection within the optical fiber 2138 is not possible, such that the light rays are reflected out of the optical fiber 2138. The optical fiber 2138 may be designed such that, at the acceptance angle, the degree of transmission may steeply fall to zero. Light rays having a large angle of incidence may be cut off; a sketch of this transmission characteristic is given below. As shown in FIG. 36, the illumination source 2115 may be adapted to illuminate the object 2112 through the angle dependent optical element 2130. The optical fiber 2138 may comprise at least one illumination fiber 2140 adapted to transmit the light beam 2142 generated by the illumination source 2115 such that it illuminates the object 2112. The illumination source 2115 may be adapted to couple the at least one light beam 2142 generated by the illumination source 2115 into the illumination fiber 2140. The optical fiber 2138 may comprise at least two or more fibers. The optical fiber 2138 may be at least one multifurcated optical fiber, in particular at least one bifurcated optical fiber. In the embodiment of FIG. 36, and as shown in the cut through in FIG. 37, the optical fiber 2138 may comprise four fibers. In particular, the optical fiber may comprise the illumination fiber 2140 and at least two fibers each for generating at least one light beam 2131, denoted as first fiber 2144 and second fiber 2146. As shown in FIG. 37, the first fiber 2144 and the second fiber 2146 may be arranged close to each other at an entrance end 2148 of the optical fiber 2138 and may split into legs separated by a distance at an exit end 2150 of the optical fiber 2138. The first fiber 2144 and second fiber 2146 may be designed as fibers having identical properties or may be fibers of different type. The first fiber 2144 may be adapted to generate at least one first light beam 2152 and the second fiber 2146 may be adapted to generate at least one second light beam 2154. The optical fiber 2138 may be arranged such that the incident light beam 2116 may impinge at a first angle of incidence into the first fiber 2144 and at a second angle of incidence, different from the first angle, into the second fiber 2146, such that the degree of transmission is different for the first light beam 2152 and the second light beam 2154. One of the optical sensors 2113 may be arranged at an exit end of the first fiber 2144 and the other optical sensor 2113 may be arranged at an exit end of the second fiber 2146. The optical fiber may comprise more than three fibers, for example four fibers as depicted in FIG. 37.
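The angle dependent transmission just described can be prototyped numerically. The following Python sketch models only the qualitative curve given above (maximal at 0°, a smooth drop to about 80% between roughly 1° and 10°, a plateau up to the acceptance angle, zero beyond); the acceptance angle of 25° and the piecewise-linear ramp are assumptions for illustration, not properties of any specific fiber.

```python
import numpy as np

def fiber_transmission(angle_deg, acceptance_deg=25.0):
    """Qualitative degree of transmission of the optical fiber versus the
    angle of incidence: maximal for rays parallel to the fiber (0 deg),
    smoothly decreasing to about 80% between roughly 1 and 10 deg,
    constant at that level up to the acceptance angle, and zero above it,
    where total internal reflection is no longer possible."""
    a = np.abs(np.asarray(angle_deg, dtype=float))
    t = np.ones_like(a)
    ramp = (a >= 1.0) & (a <= 10.0)
    t[ramp] = 1.0 - 0.2 * (a[ramp] - 1.0) / 9.0   # linear stand-in for the smooth drop
    t[(a > 10.0) & (a <= acceptance_deg)] = 0.8   # plateau up to the acceptance angle
    t[a > acceptance_deg] = 0.0                   # cut-off above the acceptance angle
    return t

print(fiber_transmission([0.0, 5.0, 15.0, 30.0]))  # -> [1.0, ~0.91, 0.8, 0.0]
```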
It shall further be noted that the embodiments shown in FIGS. 35 and 36 simply provide examples for determining the longitudinal coordinate z of the object 2112. It is also feasible, however, to modify the setups of FIGS. 35 and 36 to provide additional information on a transversal coordinate of the object 2112 and/or of parts thereof. As an example, e.g. in between the transfer device 2128 and the optical sensors 2118, 2120, one or more parts of the light beam 2116 may be branched off, and may be guided to a position-sensitive device such as one or more CCD and/or CMOS pixelated sensors and/or quadrant detectors and/or other position sensitive devices, which, from a transversal position of a light spot generated thereon, may derive a transversal coordinate of the object 2112 and/or of parts thereof. The transversal coordinate may be used to verify and/or enhance the quality of the distance information. For further details, as an example, reference may be made to one or more of the above-mentioned prior art documents which provide for potential solutions of transversal sensors. FIG. 38 visualizes the angle dependent transmission of an angle dependent optical element 2130. The angle dependent optical element 2130 may be designed such that the degree of transmission depends on the angle of incidence at which the incident light beam 2116, propagating from the object towards the detector, impinges on the angle dependent optical element 2130. The angle dependent optical element 2130 may be designed to weaken rays impinging with larger angles compared to rays impinging with a smaller angle. In particular, at the cut-off angle, the degree of transmission may steeply fall to zero and the light rays having a large angle of incidence may be cut off. As shown in FIG. 38, regions of the incident light beam 2116 are cut off by the angle dependent optical element 2130 in the generated light beam 2131. FIG. 39 shows a dependency of the transmission power P in W of the optical fiber at constant irradiated power as a function of the angle of incidence A in degrees. The acceptance angle is shown as a vertical line. The degree of transmission may be highest for incoming light rays parallel to the optical fiber, i.e. at an angle of 0°, neglecting reflection effects. For higher angles, for example angles from 1° to 10°, the degree of transmission may decrease smoothly to around 80% of the degree of transmission for parallel light rays and may remain at this level constantly up to an acceptance angle of the optical fiber 2138. At the acceptance angle, the degree of transmission may steeply fall to zero. Light rays having a large angle of incidence may be cut off. FIGS. 40A and 40B show experimental results of distance measurements. The determined distance zmeas in mm is shown as a function of the object distance zobj in mm. As illumination source 2115, a laser having a wavelength of 980 nm and an average power of 2.4 mW was used, available as a Flexpoint® module from Laser Components. Two Si photodetectors were used as optical sensors 2113. As optical fiber 2138 and transfer device 2128, a Thorlabs Fixed Focus Collimation Package F220SMA-980 was used. In FIG. 40A, the solid line indicates where zmeas=zobj. For the measurement, the object distance was varied and two different types of object were used, in particular a black paper object, curve 2156 (dotted line), and a white paper object, curve 2158 (dashed line). The determined object distance is in agreement with the real distance within 2% for small and medium distances and within 10% for large distances.
In FIG. 40B, the combined signal Q, determined by dividing the signals of the two photodetectors, is shown as a function of the distance zobj in mm for the black paper object (dotted line) and the white paper object (dashed line). The determined quotient for both object types is in agreement within 2% for small and medium distances and within 10% for large distances. FIG. 41 shows, in a highly schematic illustration, an exemplary embodiment of a detector 2110, for example according to the embodiments shown in FIG. 35 or 36. The detector 2110 specifically may be embodied as a camera 2156 and/or may be part of a camera 2156. The camera 2156 may be made for imaging, specifically for 3D imaging, and may be made for acquiring standstill images and/or image sequences such as digital video clips. Other embodiments are feasible. FIG. 41 further shows an embodiment of a detector system 2158, which, besides the at least one detector 2110, comprises one or more beacon devices 2114, which, in this example, may be attached and/or integrated into an object 2112, the position of which shall be detected by using the detector 2110. FIG. 41 further shows an exemplary embodiment of a human-machine interface 2160, which comprises the at least one detector system 2158 and, further, an entertainment device 2162, which comprises the human-machine interface 2160. The figure further shows an embodiment of a tracking system 2164 for tracking a position of the object 2112, which comprises the detector system 2158. The components of the devices and systems shall be explained in further detail below. FIG. 41 further shows an exemplary embodiment of a scanning system 2166 for scanning a scenery comprising the object 2112, such as for scanning the object 2112 and/or for determining at least one position of the at least one object 2112. The scanning system 2166 comprises the at least one detector 2110, and, further, optionally, the at least one illumination source 2115 as well as, optionally, at least one further illumination source 2115. The illumination source 2115, generally, is configured to emit at least one illumination light beam 2142, such as for illumination of at least one dot, e.g. a dot located on one or more of the positions of the beacon devices 2114 and/or on a surface of the object 2112. The scanning system 2166 may be designed to generate a profile of the scenery including the object 2112 and/or a profile of the object 2112, and/or may be designed to generate at least one item of information about the distance between the at least one dot and the scanning system 2166, specifically the detector 2110, by using the at least one detector 2110. As outlined above, an exemplary embodiment of the detector 2110 which may be used in the setup of FIG. 41 is shown in FIGS. 35 and 36. Thus, the detector 2110, besides the optical sensors 2118, 2120, comprises at least one evaluation device 2133, having e.g. the at least one divider 2134 and/or the at least one position evaluation device 2136, as symbolically depicted in FIG. 41. The components of the evaluation device 2133 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the detector 2110. Besides the possibility of fully or partially combining two or more components, one or more of the optical sensors 2118, 2120 and one or more of the components of the evaluation device 2133 may be interconnected by one or more connectors 2168 and/or by one or more interfaces, as symbolically depicted in FIG. 41.
Further, the one or more connectors 2168 may comprise one or more drivers and/or one or more devices for modifying or preprocessing sensor signals. Further, instead of using the at least one optional connector 2168, the evaluation device 2133 may fully or partially be integrated into one or both of the optical sensors 2118, 2120 and/or into a housing 2170 of the detector 2110. Additionally or alternatively, the evaluation device 2133 may fully or partially be designed as a separate device. In this exemplary embodiment, the object 2112, the position of which may be detected, may be designed as an article of sports equipment and/or may form a control element or a control device 2172, the position of which may be manipulated by a user 2174. As an example, the object 2112 may be or may comprise a bat, a racket, a club or any other article of sports equipment and/or fake sports equipment. Other types of objects 2112 are possible. Further, the user 2174 himself or herself may be considered as the object 2112, the position of which shall be detected. As outlined above, the detector 2110 comprises at least the optical sensors 2118, 2120. The optical sensors 2118, 2120 may be located inside the housing 2170 of the detector 2110. Further, the at least one transfer device 2128 is comprised, such as one or more optical systems, preferably comprising one or more lenses. An opening 2176 inside the housing 2170, which, preferably, is located concentrically with regard to the optical axis 2126 of the detector 2110, preferably defines a direction of view 2178 of the detector 2110. A coordinate system 2180 may be defined, in which a direction parallel or anti-parallel to the optical axis 2126 may be defined as a longitudinal direction, whereas directions perpendicular to the optical axis 2126 may be defined as transversal directions. In the coordinate system 2180, symbolically depicted in FIG. 41, a longitudinal direction is denoted by z, and transversal directions are denoted by x and y, respectively. Other types of coordinate systems 2180 are feasible, such as non-Cartesian coordinate systems. The detector 2110 may comprise the optical sensors 2118, 2120 as well as, optionally, further optical sensors. The optical sensors 2118, 2120 may be located in one and the same beam path, for example one behind the other, such that the first optical sensor 2118 covers a portion of the second optical sensor 2120. Alternatively, however, a branched beam path may be possible, for example using a multifurcated optical fiber. The branched beam path may comprise additional optical sensors in one or more additional beam paths, such as by branching off a beam path for at least one transversal detector or transversal sensor for determining transversal coordinates of the object 2112 and/or of parts thereof. Alternatively, however, the optical sensors 2118, 2120 may be located at the same longitudinal coordinate. One or more light beams 2116 are propagating from the object 2112 and/or from one or more of the beacon devices 2114 towards the detector 2110. The detector 2110 is configured for determining a position of the at least one object 2112. For this purpose, as explained above in the context of FIGS. 35 to 40, the evaluation device 2133 is configured to evaluate sensor signals provided by the optical sensors 2118, 2120. The detector 2110 is adapted to determine a position of the object 2112, and the optical sensors 2118, 2120 are adapted to detect the light beam 2131.
In case no illumination source 2115 is used, the beacon devices 2114 and/or at least one of these beacon devices 2114 may be or may comprise active beacon devices with an integrated illumination source such as a light-emitting diode. In case the illumination source 2115 is used, the beacon devices 2114 do not necessarily have to be active beacon devices. Contrarily, a reflective surface of the object 2112 may be used, such as integrated reflective beacon devices 2114 having at least one reflective surface such as a mirror, retro reflector, reflective film, or the like. The light beam 2116, directly and/or after being modified by the transfer device 2128, such as being focused by one or more lenses, impinges on the angle dependent optical element 2130, which generates the at least one light beam 2131 which illuminates the light-sensitive areas 2122, 2124 of the optical sensors 2118, 2120. For details of the evaluation, reference may be made to FIGS. 35 to 40 above. As outlined above, the determination of the position of the object 2112 and/or a part thereof by using the detector 2110 may be used for providing a human-machine interface 2160, in order to provide at least one item of information to a machine 2182. In the embodiments schematically depicted in FIG. 41, the machine 2182 may be a computer and/or may comprise a computer. Other embodiments are feasible. The evaluation device 2133 may even be fully or partially integrated into the machine 2182, such as into the computer. As outlined above, FIG. 41 also depicts an example of a tracking system 2164, configured for tracking the position of the at least one object 2112 and/or of parts thereof. The tracking system 2164 comprises the detector 2110 and at least one track controller 2184. The track controller 2184 may be adapted to track a series of positions of the object 2112 at specific points in time. The track controller 2184 may be an independent device and/or may be fully or partially integrated into the machine 2182, specifically the computer, as indicated in FIG. 41, and/or into the evaluation device 2133. Similarly, as outlined above, the human-machine interface 2160 may form part of an entertainment device 2162. The machine 2182, specifically the computer, may also form part of the entertainment device 2162. Thus, by means of the user 2174 functioning as the object 2112 and/or by means of the user 2174 handling a control device 2172 functioning as the object 2112, the user 2174 may input at least one item of information, such as at least one control command, into the computer, thereby varying the entertainment functions, such as controlling the course of a computer game. Referring to FIG. 42, the detector 110, 1110, 2110, such as the detector as described with respect to FIGS. 1 to 41, may be adapted to determine depth information, in particular absolute depth information, from a radiance ratio of at least two asymmetric regions of a light beam profile on the at least two optical sensors 113, 1118, 1120, 2113. For example, the detector 110, 1110, 2110 may comprise a plurality of optical sensors arranged in the matrix 117. The detector 110, 1110, 2110 may be adapted to determine depth information from a radiance ratio of at least two asymmetric regions within an enclosed, in particular defocused, beam profile captured by a single matrix of optical sensors such as a CMOS detector. In particular, the detector 110, 1110, 2110 may be adapted to determine the depth information using the radiance ratio independently of the object size, within a certain object size range. As outlined above, this principle is called Distance by Photon Ratio (DPR).
In one embodiment, the light beam 116, 1116 may illuminate the sensor element with at least one pattern comprising at least one feature point. The feature point may be selected from the group consisting of: at least one point, at least one line, at least one edge. The pattern may be generated by the object, for example, in response to an illumination by the at least one light source with an illumination pattern comprising the at least one pattern. The evaluation device 132 may be configured for deriving the quotient signal Q by

Q(z_O) = \frac{\iint_{A_1} E(x,y;z_O)\,\mathrm{d}x\,\mathrm{d}y}{\iint_{A_2} E(x,y;z_O)\,\mathrm{d}x\,\mathrm{d}y},

wherein x and y are transversal coordinates, A1 and A2 are areas of the beam profile at the sensor position, and E(x,y;z_O) denotes the beam profile given at the object distance z_O. A1 may correspond to a full or complete area of a feature point on the optical sensors. A2 may be a central area of the feature point on the optical sensors. The central area may have a constant size. The central area may be smaller compared to the full area of the feature point. For example, in case of a circular feature point, the central area may have a radius from 0.1 to 0.9 of a full radius of the feature point, preferably from 0.4 to 0.6 of the full radius. In the embodiment shown in FIG. 42, the light beam 116, 1116 propagating from the object 112, 1112 to the detector 110, 1110, 2110 may illuminate the matrix 117 with at least one line pattern 2186. The line pattern 2186 may be generated by the object 112, 1112, for example in response to an illumination by the at least one illumination source 136 with an illumination pattern comprising at least one illumination line pattern. A1 may correspond to an area with a full line width of the line pattern 2186 in the matrix 117. The line pattern 2186 in the matrix 117 may be widened and/or displaced compared to the line pattern of the illumination pattern, such that the line width in the matrix 117 is increased. In particular, the line width of the line pattern 2186 in the matrix 117 may change from one column to another column. A2 may be a central area of the line pattern 2186 in the matrix 117. The line width of the central area may be a constant value, and may in particular correspond to the line width in the illumination pattern. The central area may have a smaller line width compared to the full line width. For example, the central area may have a line width from 0.1 to 0.9 of the full line width, preferably from 0.4 to 0.6 of the full line width. The line pattern 2186 may be segmented in the matrix 117. Each of the columns may comprise center information of intensity in the central area of the line pattern 2186 and edge information of intensity from regions extending further outwards from the central area to edge regions of the line pattern 2186. FIG. 43 shows a simulation testing of object size independence for a one-dimensional case using computational ray tracing. In the simulation, an aspheric lens with f=10 mm and a 10 mm pupil diameter was used in a distance range of 100 mm to 600 mm. Using this specification, object size independence up to about 10 mm was obtained; accordingly, the spot size was varied from 1 mm to 25 mm. In FIG. 43, the quotient Q(z) over the longitudinal coordinate z is shown for 1 mm variation (curve 2188), 2 mm variation (curve 2190), 5 mm variation (curve 2192), 15 mm variation (curve 2194), 20 mm variation (curve 2196) and 25 mm variation (curve 2198). It can be seen that the quotient for object sizes above 10 mm deviates, whereas the quotients for object sizes smaller than 10 mm yield identical ratios.
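On a pixelated sensor element, the two area integrals above become sums over pixels. The following Python sketch computes Q for a circular feature point, assuming a Gaussian test spot and a central area of half the full radius (within the 0.4 to 0.6 range given above); all numerical values are illustrative.

```python
import numpy as np

def dpr_quotient(E, cx, cy, r_full, center_fraction=0.5):
    """Quotient Q: intensity integrated over the full feature-point area
    A1 divided by the intensity integrated over its central area A2."""
    yy, xx = np.indices(E.shape)
    r = np.hypot(xx - cx, yy - cy)
    a1 = E[r <= r_full].sum()                    # full area A1 of the feature point
    a2 = E[r <= center_fraction * r_full].sum()  # central area A2
    return a1 / a2

# Illustrative defocused Gaussian spot on a 64x64 pixel matrix
yy, xx = np.indices((64, 64))
E = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2.0 * 8.0 ** 2))
print(dpr_quotient(E, cx=32, cy=32, r_full=24))
```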
This object size independence reduces calibration efforts for targets of varying size and is inherent to DPR analysis. Referring to FIGS. 44A and B, as outlined above, the detector 110, 1110, 2110 may comprise the at least one matrix 117 of optical sensors 113, 1118, 1120, 2113. With the aid of such a pixelated imaging device, a defocused beam profile may be subdivided into cross-sections along lines of a certain angle θ and with a distance ω from the origin of ordinates, as shown in FIG. 44A. Accordingly, the parameterization of a single line would be given by ω = x cos(θ) + y sin(θ). The integration of the intensity along parallel lines can be mathematically described by an integral projection \mathcal{R}(\omega,\theta)\{\cdot\} of the well-known Radon transform, which reads

\mathcal{R}(\omega,\theta)\{f(x,y)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\,\delta\!\left(x\cos(\theta) + y\sin(\theta) - \omega\right)\,\mathrm{d}x\,\mathrm{d}y,

where δ denotes the Dirac delta function and f(x,y) is the intensity of an enclosed defocused beam profile. The photon ratio R for a given angle θ and projection width ω is then given by

R = \frac{\mathcal{R}(\omega,\theta)\{f'(x,y)\}}{\mathcal{R}(\omega,\theta)\{f(x,y)\}},

with f'(x,y) as the overshined image region highlighted in FIG. 44B; a numerical sketch of this projection and ratio is given below. It is expected that the variation of θ yields different ratios R for skewed object surfaces. It may be sufficient to let θ vary in the interval \{\theta \in \mathbb{R}_{+},\ \theta < \pi\}. FIGS. 45A and B show further embodiments of the detector 110 according to the present invention comprising at least one bi-cell. The illumination source 136, such as a laser source, may generate the light beam 138 illuminating the object 112. The reflected light beam 116 may propagate from the object 112 to the transfer device 128 and may impinge on the bi-cell of optical sensors 176. In FIG. 45A a side view is shown, and in FIG. 45B a front view is shown. The detector 110 may comprise at least one FiP sensor adapted for generating the so-called FiP effect as described in WO 2015/024871 or WO 2016/120392. For example, the bi-cell in FIGS. 45A and B may be adapted to generate a so-called FiP signal. As outlined e.g. in WO 2015/024871 or WO 2016/120392, the FiP signal can be used to determine depth information over a wide distance range. The FiP sensor may be adapted to exhibit a positive and/or a negative FiP effect. The negative FiP effect may be used to tune small image effects at high distances. Image changes such as position, size, shape, sharpness, etc. may vanish at high distances while the negative FiP effect increases. Furthermore, no luminance dependence may be introduced, since both cells are at the same longitudinal position and thus receive identical photon density. FIG. 46 shows experimental results, in particular the spot diameter independence and luminance independence of the combined sensor signal, determined using the detector setup shown in FIGS. 45A and B. In particular, the bi-cell was a PbS bi-cell, and a 1550 nm laser was used with a laser spot size of 4 mm. The baseline was 12.5 mm. The transfer device was a Thorlabs asphere lens with a focal length of f=20 mm and a diameter of D=25 mm. FIG. 46 shows the quotient Q over the longitudinal coordinate z for different luminance and spot diameter, in particular for a luminance of 2.6 mW and a spot diameter of 12 mm (curve 2200), 2.4 mW and 6 mm (curve 2202) and 1.2 mW and a spot diameter of 3 mm (curve 2204). All curves show identical curve shape and thus spot diameter independence. FIGS. 47A to C show three embodiments of a hexagonal illumination pattern. The illumination source 136 may be adapted to generate at least one illumination pattern for illuminating the object 112. Additionally or alternatively, the illumination pattern may be generated by at least one ambient light source.
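The integral projection and photon ratio of FIGS. 44A and B can be approximated on pixel data by binning each pixel's coordinate ω = x cos θ + y sin θ. The Python sketch below is illustrative only: the Gaussian test profile is an assumption, and the overshined region f′ is stood in for by a simple mask, since its exact definition depends on FIG. 44B.

```python
import numpy as np

def radon_projection(f, theta, n_bins=64):
    """Discrete integral projection of the Radon transform: the intensity
    f(x, y) is summed along lines x*cos(theta) + y*sin(theta) = omega,
    one sum per omega bin."""
    yy, xx = np.indices(f.shape)
    x = xx - f.shape[1] / 2.0
    y = yy - f.shape[0] / 2.0
    omega = x * np.cos(theta) + y * np.sin(theta)
    bins = np.linspace(omega.min(), omega.max(), n_bins + 1)
    proj, _ = np.histogram(omega, bins=bins, weights=f)
    return 0.5 * (bins[:-1] + bins[1:]), proj

# Illustrative enclosed, defocused beam profile (Gaussian) and a stand-in
# for the overshined region f' (here: everything right of a vertical line).
yy, xx = np.indices((64, 64))
f = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2.0 * 10.0 ** 2))
f_prime = np.where(xx > 36, f, 0.0)

theta = np.pi / 4
omega, p = radon_projection(f, theta)
_, p_prime = radon_projection(f_prime, theta)

i = np.argmax(p)              # evaluate R at one projection bin, e.g. the peak
R = p_prime[i] / p[i]
print(omega[i], R)
```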
Specifically, the illumination source 136 may comprise at least one laser and/or laser source. Various types of lasers may be employed, such as semiconductor lasers. Additionally or alternatively, non-laser light sources may be used, such as LEDs and/or light bulbs. The illumination pattern may comprise at least one feature such as a point or symbol. The illumination pattern may comprise a plurality of features. The illumination pattern may comprise an arrangement of periodic or non-periodic features. The illumination pattern may be generated by ambient light, such as by at least one ambient light source, or by the at least one illumination source. The illumination pattern may comprise at least one pattern selected from the group consisting of: at least one point pattern, in particular a pseudo-random point pattern, a random point pattern or a quasi-random pattern; at least one Sobol pattern; at least one quasiperiodic pattern; at least one pattern comprising at least one pre-known feature; at least one regular pattern; at least one triangular pattern; at least one hexagonal pattern; at least one rectangular pattern; at least one pattern comprising convex uniform tilings; at least one line pattern comprising at least one line; at least one line pattern comprising at least two lines such as parallel or crossing lines. For example, the illumination source may be adapted to generate and/or to project a cloud of points. The illumination pattern may comprise a regular and/or constant and/or periodic pattern, such as a triangular pattern, a rectangular pattern, a hexagonal pattern, or a pattern comprising further convex tilings. The illumination pattern may comprise as many features per area as possible, such that a hexagonal pattern may be preferred. A distance between two features of the illumination pattern and/or an area of the at least one illumination feature may depend on the circle of confusion in the image. The illumination features of the illumination pattern may be arranged such that only few reference features are positioned on an epipolar line. As shown in FIG. 47A, the illumination pattern may comprise at least one hexagonal pattern, wherein the individual points are positioned on epipolar lines 2206. As shown in FIG. 47B, the illumination pattern may comprise at least one hexagonal pattern, wherein the pattern is rotated relative to the baseline. Such a positioning of the illumination features allows enhancing the distance between the individual points on each epipolar line. For example, as shown in FIG. 47C, the illumination pattern may comprise at least one displaced hexagonal pattern, wherein individual points of the hexagonal pattern are displaced by a random distance from the regular position, for example orthogonal to the epipolar line of the point. The displacement of the individual points may be smaller than half of the distance between two parallel epipolar lines, preferably smaller than one fourth of the distance between two parallel epipolar lines. The displacement of the individual points may be such that two points are not displaced above each other. Such a positioning allows enhancing the number of possible features per area; a sketch of such a displaced pattern is given below. FIG. 48 shows an embodiment of a scanning device 154. The scanning device 154 may be adapted as a line scanning device. In particular, the scanning device 154 may comprise at least one sensor line or row of optical sensors 113. Furthermore, the scanning device 154 may comprise the at least one transfer device 128 and the at least one illumination source 136.
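The displaced hexagonal pattern of FIG. 47C can be generated procedurally. In the following Python sketch, the epipolar lines are assumed to be horizontal and to coincide with the hexagonal rows; the pitch, grid size and random seed are arbitrary illustration values.

```python
import numpy as np

def displaced_hexagonal_pattern(n_cols=8, n_rows=6, pitch=1.0, seed=0):
    """Hexagonal illumination pattern whose points are displaced
    orthogonally to the (here: horizontal) epipolar lines by a random
    distance smaller than one fourth of the epipolar line spacing."""
    rng = np.random.default_rng(seed)
    row_spacing = pitch * np.sqrt(3) / 2.0    # spacing of the epipolar lines
    points = []
    for r in range(n_rows):
        x0 = 0.5 * pitch if r % 2 else 0.0    # alternating hexagonal row offset
        for c in range(n_cols):
            dy = rng.uniform(-0.25, 0.25) * row_spacing  # orthogonal displacement
            points.append((x0 + c * pitch, r * row_spacing + dy))
    return np.array(points)

print(displaced_hexagonal_pattern()[:5])
```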
Triangulation systems require a sufficient baseline; however, due to the baseline, no detection may be possible in the near field. Near field detection may be possible if the light spot is tilted in the direction of the transfer device. However, the tilting leads to the light spot moving out of the field of view, which limits detection in far field regions. Thus, in triangulation systems, the nonzero baseline will always lead to a substantial reduction in the measurement range, in the near field and/or in the far field. Reducing the baseline, as is possible with the detector according to the present invention, will thus always increase the measurement range. Further, these near field and far field problems can be overcome by using the scanning device 154 of FIG. 48. The scanning device 154 may be adapted to detect a plurality of light beams 116 propagating from the object 112 to the scanning device 154 on the CMOS line. The light beams 116 may be generated at different positions on the object 112 or by movement of the object 112. The scanning device 154 may be adapted to determine at least one longitudinal coordinate for each of the light points by determining the quotient signal Q as described above.
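For a line scanning device of this kind, each light spot on the sensor line can be evaluated separately. The Python sketch below locates spots along a one-dimensional line of pixel values and forms, for each spot, the quotient of the full spot integral and its central part; the simplistic peak detection and the window widths are assumptions for illustration, and each quotient would then be mapped to a longitudinal coordinate via a calibration curve as in the earlier sketch.

```python
import numpy as np

def line_scan_quotients(line, min_height=0.2, center_halfwidth=2, full_halfwidth=6):
    """For each light spot on the sensor line, return (pixel index,
    quotient of full spot integral over central-part integral)."""
    line = np.asarray(line, dtype=float)
    peaks = [i for i in range(1, len(line) - 1)
             if line[i] > min_height
             and line[i] >= line[i - 1] and line[i] > line[i + 1]]
    result = []
    for p in peaks:
        full = line[max(0, p - full_halfwidth): p + full_halfwidth + 1].sum()
        center = line[max(0, p - center_halfwidth): p + center_halfwidth + 1].sum()
        result.append((p, full / center))
    return result

# Illustrative sensor line with two Gaussian spots of different defocus widths
x = np.arange(128)
line = (np.exp(-(x - 30.0) ** 2 / (2 * 2.0 ** 2))
        + np.exp(-(x - 90.0) ** 2 / (2 * 4.0 ** 2)))
print(line_scan_quotients(line))
```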
Additionally or alternatively, non-laser light sources may be used, such as LEDs and/or light bulbs. The pattern may include a plurality of features. The pattern may include an arrangement of periodic or non-periodic features. The illumination pattern may include at least one pattern selected from the group consisting of: at least one point pattern, in particular a pseudo-random point pattern; at least one pattern comprising at least one pre-known feature. For example, the first illumination source 328 may be adapted to generate and/or to project the cloud of points or dots 318. The first illumination source 328 may include one or more of: at least one light projector; at least one digital light processing (DLP) projector; at least one LCoS projector; at least one spatial light modulator; at least one diffractive optical element; at least one array of light emitting diodes; at least one array of laser light sources. The first illumination source 328 may include at least one light source adapted to generate the illumination pattern directly. The illumination pattern may comprise a plurality of illumination features. The illumination pattern may be selected from the group consisting of: at least one point pattern; at least one line pattern; at least one stripe pattern; at least one checkerboard pattern; at least one pattern comprising an arrangement of periodic or non-periodic features. The illumination pattern may comprise a regular and/or constant and/or periodic pattern such as a triangular pattern, a rectangular pattern, a hexagonal pattern or a pattern comprising further convex tilings. The illumination pattern may exhibit the at least one illumination feature selected from the group consisting of: at least one point; at least one line; at least two lines such as parallel or crossing lines; at least one point and one line; at least one arrangement of periodic or non-periodic features; at least one arbitrarily shaped feature. The illumination pattern may comprise at least one pattern selected from the group consisting of: at least one point pattern, in particular a pseudo-random point pattern, a random point pattern or a quasi random pattern; at least one Sobol pattern; at least one quasiperiodic pattern; at least one pattern comprising at least one pre-known feature; at least one regular pattern; at least one triangular pattern; at least one hexagonal pattern; at least one rectangular pattern; at least one pattern comprising convex uniform tilings; at least one line pattern comprising at least one line; at least one line pattern comprising at least two lines such as parallel or crossing lines. The first illumination source 328 may include the at least one light projector adapted to generate a cloud of points or dots 318 such that the illumination pattern may comprise a plurality of point features. The first illumination source 328 may comprise at least one mask adapted to generate the illumination pattern from at least one light beam generated by the first illumination source 328. The first illumination source 328 may illuminate the at least one object 312 with the illumination pattern. The illumination pattern may comprise a plurality of points or dots 318 as illumination features. In the example embodiment, the first illumination source 328 is a laser source 328 configured to emit the at least one illumination light beam 316. The laser source 328 may emit the at least one light beam 316 in the infrared spectral range. It shall be noted, however, that other spectral ranges are feasible, additionally or alternatively.
Various types of lasers may be employed as the laser source 328, such as semiconductor lasers, double heterostructure lasers, external cavity lasers, separate confinement heterostructure lasers, quantum cascade lasers, Distributed Bragg Reflector lasers, polariton lasers, hybrid silicon lasers, extended cavity diode lasers, quantum dot lasers, volume Bragg grating lasers, Indium Arsenide lasers, transistor lasers, diode pumped lasers, distributed feedback lasers, quantum well lasers, interband cascade lasers, Gallium Arsenide lasers, semiconductor ring lasers, or vertical cavity surface-emitting lasers (VCSELs). The laser source 328 may also be a tunable laser source, that is, a laser source having at least one property which can be controlled and/or adjusted. For example, the tunable laser source 328 may comprise one or more of a semiconductor tunable laser, a Sample Grating Distributed Bragg Reflector laser (SG-DBR), an external cavity laser, for example using a Micro Electro Mechanical System (MEMS) structure, a diode laser, a VCSEL, a VCSEL array, a distributed feedback laser, or the like. The tunable laser source 328 may be tunable over a wavelength range from 350 to 1500 nm, preferably from 400 to 1100 nm, more preferably from 700 to 1000 nm, most preferably from 770 to 980 nm. The tunable laser source 328 may include a driver (not shown), specifically a tunable driver, and the projector 311 may include at least one control unit (not shown) to control the at least one property of the tunable laser source 328 (for example, by applying an electric signal to the tunable laser source 328). The at least one property of the tunable laser source may be at least one property selected from the group consisting of a voltage, a current, a temperature, an emission wavelength, an intensity and the like. For example, the emission wavelength of the tunable laser source 328 may be adjustable by one or more of varying a driver current, changing a MEMS state, changing the modulation of an electro-optical or an acousto-optical modulator, or the like. In particular, the emission wavelength of the at least one coherent light beam 316 emitted by the tunable laser source 328 may depend on the driver current by which the tunable laser source is driven and/or on the temperature. In some examples, the first illumination source 328 may be embodied as a plurality of tunable laser sources 328. Further, the laser source 328 may emit modulated or non-modulated light. In case a plurality of tunable laser sources is used, the different tunable laser sources may have different modulation frequencies, which may later be used for distinguishing the light beams and, specifically, the respective illumination patterns. Additionally or alternatively, non-laser light sources may be used as the first illumination source 328, such as LEDs and/or light bulbs. On account of their generally defined beam profiles and other properties of handleability, the use of at least one laser source as the first illumination source 328 is particularly preferred. In the embodiment shown in FIG. 49, the system 300 further comprises a second illumination source 338. The second illumination source 338 emits an illuminating light beam 320 for illuminating the object 312. The second illumination source 338 may include at least one light source, such as a plurality of light sources.
The second illumination source 338 may include an artificial illumination source, in particular at least one laser source and/or at least one incandescent lamp and/or at least one semiconductor light source, for example, at least one light-emitting diode, in particular an organic and/or inorganic light-emitting diode. As an example, the light emitted by the second illumination source 338 may have a wavelength of 300 to 1100 nm, especially 500 to 1100 nm. Additionally or alternatively, light in the infrared spectral range may be used, such as in the range of 780 nm to 3.0 μm. Specifically, light in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1100 nm, may be used. In one example, the second illumination source 338 is at least one light emitting diode, such as an array of light emitting diodes, that emits a floodlight 320 to illuminate the object 312. In other examples, other light sources may be used as the second illumination source 338, such as those described above for the first illumination source 328. The second illumination source 338 may be configured for providing additional illumination for imaging, recognition and/or authentication of the object 312. For example, the second illumination source 338 may be used in situations in which recording a reflection pattern from the illumination pattern is difficult or not possible, e.g., in cases where the object 312 is located in a dark or dimly lit surrounding environment, in order to ensure a good illumination and, thus, sufficient contrast for two-dimensional images such that a two-dimensional image recognition is possible. For example, illumination of the object 312 can be extended by an additional flood illumination LED. This further illumination source may illuminate the object 312, such as a face, with the LED and, in particular, without the illumination pattern, and an optical sensor 330 may be configured for capturing the two-dimensional image. The 2D image may be used for face detection and verification algorithms. In embodiments in which imaging is performed through a display, the distorted image captured by the optical sensor can be repaired if an impulse response of the display is known. The evaluation device may be configured for determining at least one corrected image I0 by deconvolving the second image I with a grating function g, wherein I = I0 * g, with * denoting convolution. The grating function is also denoted as the impulse response. The undistorted image can be restored by a deconvolution approach, e.g., Van Cittert or Wiener deconvolution. The display device may be configured for determining the grating function g. For example, the display device may be configured for illuminating a black scene with an illumination pattern comprising a single small bright spot. The captured image may be taken as the grating function. This procedure may be performed only once, such as during calibration. For determining a corrected image even for imaging through the display, the display device may be configured for capturing the image and for using the deconvolution approach with the captured impulse response g. The resulting image may be a reconstructed image with fewer artifacts of the display and can be used for several applications, e.g., face recognition; a minimal sketch of such a deconvolution is given below. Although the projector 311 is shown as a single assembly that includes the first illumination source 328 and the second illumination source 338, it is contemplated that multiple projectors 311 may be used.
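As an illustration of the deconvolution approach described above, the following sketch shows a Wiener deconvolution. It assumes the grating function g was captured during calibration (the image of a single small bright spot), has the same shape as the distorted image, and is centered; the regularization constant k and all names are hypothetical.

```python
import numpy as np

def wiener_deconvolve(distorted, grating, k=1e-3):
    """Restore a corrected image I0 from a distorted image I = I0 * g
    (with * denoting convolution) via Wiener deconvolution.

    distorted: 2D array, image I captured through the display
    grating:   2D array, grating function / impulse response g,
               same shape as the distorted image and centered
    k:         noise-to-signal regularization constant"""
    G = np.fft.fft2(np.fft.ifftshift(grating))   # move the center of g to the origin
    I = np.fft.fft2(distorted)
    # The Wiener filter conj(G) / (|G|^2 + k) approximates 1/G while
    # damping frequencies at which the grating transmits little energy.
    I0 = np.conj(G) * I / (np.abs(G) ** 2 + k)
    return np.real(np.fft.ifft2(I0))
```

The regularization constant k trades off artifact removal against noise amplification; a Van Cittert iteration could be substituted where the noise level is low.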
In some examples, multiple projectors 311 may be used and each projector 311 contains either the first illumination source 328 or the second illumination source 338. In other examples where multiple projectors 311 are used, each projector 311 may include the first illumination source 328 and the second illumination source 338. The projector 311 may be operable such that the first illumination source 328 and the second illumination source 338 emit the respective light beams 316 and 320 at the same time, or in an alternating manner. For example, an illumination cycle of the projector 311 may include generating the at least one illumination pattern on the surface of the object 312 using the first illumination source 328 and illuminating the object 312 with a floodlight using the second illumination source 338 in an alternating manner. Additionally or alternatively, the first illumination source 328 may generate the at least one illumination pattern on the surface of the object 312 and, at the same time, the second illumination source 338 illuminates the object 312 with a floodlight 320. The projector 311 may also include at least one optical element 340 that is impinged by the at least one light beam 316 and/or 320 emitted by the first illumination source 328 and the second illumination source 338, respectively. The optical element 340 propagates the light beams 316 and/or 320 emitted by the respective illumination sources 328 and 338 toward the object 312. For example, the at least one optical element 340 includes a diffractive element, such as a lens or a multilens array, that diffracts, diffuses or scatters the impinging light beams 316 and/or 320 emitted by the respective illumination sources 328 and 338. In some embodiments, the projector 311 includes the at least one optical element 340 to generate and/or form the illumination pattern on the surface of the object 312 by diffracting, diffusing, or scattering the light beams 316 emitted by the first illumination source 328, which may be a laser source 328 as described above. The projector 311 may include an equal number of laser sources 328 and diffractive optical elements 340. The projector 311 may include one diffractive optical element 340 and one laser source 328. Thus, the projector 311 may be configured to generate the illumination pattern using only one laser source 328 and one diffractive optical element 340.

FIG. 50 shows an example diffractive optical element 400a used to generate and/or form the illumination pattern on the surface of the object 312. The example diffractive optical element 400a may be used as the diffractive optical element 340 shown in FIG. 49. In the example embodiment, the diffractive optical element 400a includes a stacked array of lenses 402. The stacked array of lenses 402 includes a first lens 402a, a second lens 402b, and a third lens 402c in this example. In other embodiments, the stacked array of lenses 402 may include any number of lenses that enables the diffractive optical element 400a to function as described herein. In some examples, a single lens 402 may be used. The stacked array of lenses 402 is disposed within a cavity 404 defined by a hood 326 of the projector 311. The hood 326 is tubular and extends outward from the housing 305 of the projector 311. The hood 326 is open at both ends to allow light beams 316 from the laser source 328 to impinge the stacked array of lenses 402 and be propagated toward the object 312.
The lenses 402 are stacked such that the first lens 402a is disposed at a first end 406 of the cavity 404, the third lens 402c is disposed at a second end of the cavity 404, and the second lens 402b is interposed between the first lens 402a and the third lens 402c. Adjacent lenses 402a and 402b and adjacent lenses 402b and 402c are spaced apart from one another by a suitable distance. Moreover, the first lens 402a disposed at the first end 406 of the cavity 404 is located proximate the laser source 328 such that pre-diffracted light beams 316a emitted by the laser source 328 impinge the first lens 402a, and are successively propagated through the second lens 402b and the third lens 402c, and diffracted light beams 316b exit the third lens 402c and are propagated toward the object 312 to generate and/or form the illumination pattern. Suitably, the pre-diffracted light beams 316a emitted by the laser source 328 are incident collimated laser beam rays. The illumination pattern may depend on the design of the diffractive optical element 400a. Each of the first lens 402a, the second lens 402b, and the third lens 402c is selected to have a suitable size and shape for generating and/or forming the illumination pattern. For example, the lenses 402a-c may be suitably sized and shaped to generate and/or form illumination patterns that include regular and/or constant and/or periodic patterns such as a triangular pattern, a rectangular pattern, a hexagonal pattern, or a pattern comprising further convex tilings. The illumination patterns may include as many features per area as possible, such that a hexagonal pattern may be preferred. Example hexagonal patterns are illustrated in FIGS. 47A-C and described in further detail herein. A distance between two features of the respective illumination pattern and/or an area of the at least one illumination feature may depend on a circle of confusion in an image. Each of the lenses 402a-402c may be selected from a focus-tunable lens; an aspheric lens; a spheric lens; a Fresnel lens; a concave lens, including a plano-concave and a biconcave lens; a convex lens, including a plano-convex and a biconvex lens; and a meniscus lens.

FIG. 51 shows another example diffractive optical element 400b used to generate and/or form the illumination pattern on the surface of the object 312. The example diffractive optical element 400b may be used as the diffractive optical element 340 shown in FIG. 49. In the example embodiment, the diffractive optical element 400b includes a lens 402 and a diffractive plate 403. Like the stacked array of lenses 402 of the diffractive optical element 400a (FIG. 50), the lens 402 and the diffractive plate 403 are disposed within a cavity 404 defined by a hood 326 of the projector 311. The hood 326 is tubular and extends outward from the housing 305 of the projector 311. The hood 326 is open at both ends to allow light beams 316a from the laser source 328 to impinge the lens 402 and, subsequently, the diffractive plate 403, and the diffracted light beams 316b are propagated toward the object 312. The lens 402 and the diffractive plate 403 are stacked such that the lens 402 is disposed between a first end 406 and a second end 408 of the cavity 404, and the diffractive plate is disposed at the second end 408 of the cavity 404. The lens 402 and the diffractive plate 403 are spaced apart from one another by a suitable distance.
Moreover, the lens 402 is located proximate the laser source 328 such that pre-diffracted light beams 316a emitted by the laser source 328 impinge the lens 402, and are propagated through the diffractive plate 403, and diffracted light beams 316b exit the diffractive plate 403 and are propagated toward the object 312 to generate and/or form the illumination pattern. Suitably, the pre-diffracted light beams 316a emitted by the laser source 328 are incident collimated laser beam rays. In the example shown, the laser source 328 may emit the light beams 316a from an edge adjacent the first end 406 of the hood 326 such that the light beams 316a travel initially in a direction perpendicular or at an oblique angle to a longitudinal axis extending through the cavity 404. The light beams 316a may be diverted to travel toward the lens 402 by a diverting element 410 (e.g., a mirror). As described above for the diffractive optical element 400a, the illumination pattern may depend on the design of the diffractive optical element 400b. Each of the lens 402 and the diffractive plate 403 is selected to have a suitable size and shape for generating and/or forming the illumination pattern. For example, the lens 402 and diffractive plate 403 may be suitably sized and shaped to generate and/or form illumination patterns that include regular and/or constant and/or periodic patterns such as a triangular pattern, a rectangular pattern, a hexagonal pattern, or a pattern comprising further convex tilings. The illumination patterns may include as many features per area as possible, such that a hexagonal pattern may be preferred. Example hexagonal patterns are illustrated in FIGS. 47A-C and described in further detail herein. A distance between two features of the respective illumination pattern and/or an area of the at least one illumination feature may depend on a circle of confusion in an image. Each of the lens 402 and the diffractive plate 403 may be selected from a focus-tunable lens; an aspheric lens; a spheric lens; a Fresnel lens; a concave lens, including a plano-concave and a biconcave lens; a convex lens, including a plano-convex and a biconvex lens; and a meniscus lens.

FIG. 52 shows another example diffractive optical element 400c used to generate and/or form the illumination pattern on the surface of the object 312. The example diffractive optical element 400c may be used as the diffractive optical element 340 shown in FIG. 49. In the example embodiment, the diffractive optical element 400c includes a single lens or refractive-diffractive element 402. Like the stacked array of lenses 402 of the diffractive optical element 400a (FIG. 50), and the lens 402 and the diffractive plate 403 (FIG. 51), the refractive-diffractive element 402 is disposed within a cavity 404 defined by a hood 326 of the projector 311. The hood 326 is tubular and extends outward from the housing 305 of the projector 311. The hood 326 is open at both ends to allow light beams 316a from the laser source 328 to impinge the refractive-diffractive element 402, and the diffracted light beams 316b are propagated toward the object 312. The refractive-diffractive element 402 is disposed at the second end 408 of the cavity 404, with the light source 328 being disposed at the first end 406. As described above, light beams 316a emitted by the laser source 328 are propagated through the cavity 404 and impinge the refractive-diffractive element 402, and diffracted light beams 316b exit the refractive-diffractive element 402 and are propagated toward the object 312 to generate and/or form the illumination pattern.
Suitably, the pre-diffracted light beams 316a emitted by the laser source 328 are incident collimated laser beam rays. As described above for the diffractive optical elements 400a and 400b, the illumination pattern may depend on the design of the diffractive optical element 400c. The configuration of the refractive-diffractive element 402 is selected to have a suitable size and shape for generating and/or forming the illumination pattern. For example, the refractive-diffractive element 402 may be suitably sized and shaped to generate and/or form illumination patterns that include regular and/or constant and/or periodic patterns such as a triangular pattern, a rectangular pattern, a hexagonal pattern, or a pattern comprising further convex tilings. The illumination patterns may include as many features per area as possible, such that a hexagonal pattern may be preferred. Example hexagonal patterns are illustrated in FIGS. 47A-C and described in further detail herein. A distance between two features of the respective illumination pattern and/or an area of the at least one illumination feature may depend on a circle of confusion in an image. The illumination pattern generated and/or formed by the diffractive optical elements 400a-c may be wavelength dependent. Specifically, the illumination patterns generated and/or formed by the diffractive optical elements 400a-c may be interference patterns, which are strongly wavelength dependent. In some embodiments, the laser source 328 may be a tunable laser source 328 and the projector 311 may control at least one property of the tunable laser source 328 to generate changeable illumination patterns using one or multiple (e.g., three) wavelengths as described in U.S. Patent Application Publication No. 2022/0146250 A1, the disclosure of which is incorporated by reference herein. The projected illumination pattern may be a periodic point pattern. The projected illumination pattern may have a low point density. For example, the illumination pattern may comprise at least one periodic point pattern having a low point density, wherein the illumination pattern has ≤2500 points per field of view. In comparison with structured light, which typically has a point density of 10k to 30k points in a field of view of 55×38°, the illumination pattern according to the present invention may be less dense. This may allow more power per point such that the proposed technique is less dependent on ambient light compared to structured light. The illumination features or dots 318 are spatially modulated. The illumination pattern, in particular the spatial arrangement of illumination features or dots 318, may be designed with respect to a field of view of a sensor element, for example, the optical sensor 330. Specifically, the illumination features 318 are patterned illumination features 318, wherein each of the patterned illumination features 318 comprises a plurality of sub-features, and/or the illumination features 318 are arranged in a periodic pattern equidistant in rows, wherein each of the rows of illumination features 318 has an offset, wherein the offsets of neighboring rows differ. As shown in FIG. 53A, the illumination features 318 may be arranged in a periodic pattern equidistant in rows. The distance between neighboring illumination features on a row may be d. Each of the rows of illumination features 318 may have an offset d, wherein the offsets of neighboring rows differ. The offset d may be a spatial distance between neighboring rows.
The sensor element 330 and the projector 311 of FIG. 49 may be positioned such that the rows run parallel to epipolar lines 362. The illumination pattern 360 may be selected such that two neighboring illumination features 318 on an epipolar line 362 have a suitable distance. The distance between two illumination features 318 may be such that it is possible to unambiguously assign two points on the epipolar line 362 via the depth-from-photon-ratio technique. The suitable distance may depend on the distance error of the depth-from-photon-ratio technique and/or on a baseline of the sensor element 330 and the projector 311. The illumination features 318 may be arranged as follows. The illumination pattern 360 may be a grid that includes a number of rows on which the illumination features 318 are arranged in equidistant positions with distance d. The rows are orthogonal with respect to the epipolar lines 362. A distance between the rows may be constant. A different offset may be applied to each of the rows in the same direction. The offset may result in the illumination features of a row being shifted. The offset d may be d=a/b, wherein a and b are positive integer numbers such that the illumination pattern is a periodic pattern. For example, d may be 1/3 or 2/5. The so constructed illumination pattern 360 reveals a shifted grid in comparison to the initial regular rectangular pattern. The distance between features on the epipolar lines 362 for this grid arrangement is three times larger compared to the initial regular rectangular pattern; a minimal sketch of such a shifted grid is given below. The offset and density of illumination features 318 may enhance robustness for solving the correspondence problem. FIG. 53B shows the illumination pattern 360 in the field of view of the sensor element 330. By using the offset, the illumination features 318 can be arranged such that the illumination pattern 360 matches with the field of view of the sensor element 330. The illumination features 318 may be patterned illumination features. Each of the patterned illumination features may comprise a plurality of sub-features. The sub-features belonging to the same illumination feature 318 may be shaped identically. For example, the illumination feature 318 may comprise a plurality of circles, each having a center and a radius. The sub-features belonging to the same illumination feature 318 may be arranged at different spatial positions in the illumination pattern 360. Specifically, the centers of the sub-features are arranged at different spatial positions in the illumination pattern 360. The extension of the sub-features may be selected such that they are clearly distinguishable. For example, the patterned illumination feature 318 may be or may comprise a patterned light spot comprising a number of smaller light spots, or a cluster of a few smaller light spots, packed densely to form a certain pattern. Rotated versions of these patterned illumination features, such as rotated by 45, 90 or 180 degrees, can be used as well. The chosen patterned illumination feature 318 may be replicated, such as 1000 to 2000 times, to form the illumination pattern 360. In other words, the projected illumination pattern 360 may comprise, e.g., 1000 to 2000 copies of the chosen patterned illumination feature 318. For example, the projector 311 of FIG. 49 includes the first illumination source 328, in particular the laser source 328, configured for generating at least one light beam, also denoted laser beam.
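For illustration only, the shifted grid described above might be constructed as in the following sketch. It assumes horizontal epipolar lines with the rows orthogonal thereto, and it interprets the offset a/b in units of the feature distance d, which is consistent with the threefold epipolar spacing stated above for an offset of 1/3; all names are hypothetical.

```python
import numpy as np
from fractions import Fraction

def shifted_grid(n_rows=9, feats_per_row=9, d=1.0, row_dist=1.0,
                 offset=Fraction(1, 3)):
    """Grid of illumination features: features spaced d within each row,
    rows orthogonal to horizontal epipolar lines, and each row shifted
    along its direction by offset*d relative to the previous row."""
    points = []
    for r in range(n_rows):
        shift = float(offset) * d * r        # cumulative offset of row r
        for f in range(feats_per_row):
            x = r * row_dist                 # constant distance between rows
            y = f * d + shift                # feature position along the row
            points.append((x, y))
    return np.asarray(points)

pattern = shifted_grid()
# Features on the epipolar line y == 0 now come only from every third row,
# so their spacing along the line is three times the row distance.
on_line = pattern[np.isclose(pattern[:, 1], 0.0)]
print(on_line[:, 0])  # [0. 3. 6.]
```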
The projector 311 may include the at least one transfer device, in particular the DOE 340, for diffracting and for replicating the laser beam generated by the single laser source for generating the illumination pattern 360 comprising the patterned illumination features. The diffractive optical element 340 may be configured for beam shaping and/or beam splitting. For example, the projector 311 may include at least one array of densely packed light sources, in particular laser sources 328, arranged according to a certain pattern and configured for generating a cluster of light beams. The density of the laser sources 328 may depend on the extension of a housing of the individual light sources and on the distinguishability of the light beams. The projector 311 may include the at least one transfer device, in particular the DOE 340, for diffracting and replicating the cluster of light beams for generating the illumination pattern 360 comprising patterned illumination features. Referring back to FIG. 49, the detector 310 includes the optical sensor 330 having at least one light-sensitive area 332. The optical sensor 330 is configured for determining at least one first image including at least one two dimensional image of the object 312. The optical sensor 330 is configured for determining at least one second image including a plurality of reflection features generated by the object 312 in response to illumination by the illumination features. The detector 310 may include a single camera comprising the optical sensor 330. The detector 310 may comprise a plurality of cameras, each comprising an optical sensor 330 or a plurality of optical sensors 330. The at least one first image may be or include at least one two dimensional image of the object 312, where the two dimensional image includes information about transversal coordinates, but not longitudinal coordinates, such as the dimensions of height and width only. The at least one second image may be or include at least one three dimensional image of the object 312, where the three dimensional image includes information about transversal coordinates and additionally about the longitudinal coordinate, such as the dimensions of height, width and depth. The optical sensor 330 specifically may be or may include at least one photodetector, preferably inorganic photodetectors, more preferably inorganic semiconductor photodetectors, most preferably silicon photodetectors. Specifically, the optical sensor 330 may be sensitive in the infrared spectral range. All pixels of the matrix or at least a group of the optical sensors of the matrix specifically may be identical. Groups of identical pixels of the matrix specifically may be provided for different spectral ranges, or all pixels may be identical in terms of spectral sensitivity. Further, the pixels may be identical in size and/or with regard to their electronic or optoelectronic properties. Specifically, the optical sensor 330 may be or may include at least one inorganic photodiode which is sensitive in the infrared spectral range, preferably in the range of 700 nm to 3.0 micrometers. Specifically, the optical sensor 330 may be sensitive in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1100 nm. Infrared optical sensors which may be used may be commercially available infrared optical sensors, such as those commercially available under the brand name Hertzstueck™ from trinamiX™ GmbH, D-67056 Ludwigshafen am Rhein, Germany.
Thus, as an example, the optical sensor 330 may include at least one optical sensor of an intrinsic photovoltaic type, more preferably at least one semiconductor photodiode selected from the group consisting of: a Ge photodiode, an InGaAs photodiode, an extended InGaAs photodiode, an InAs photodiode, an InSb photodiode, a HgCdTe photodiode. Additionally or alternatively, the optical sensor 330 may comprise at least one optical sensor of an extrinsic photovoltaic type, more preferably at least one semiconductor photodiode selected from the group consisting of: a Ge:Au photodiode, a Ge:Hg photodiode, a Ge:Cu photodiode, a Ge:Zn photodiode, a Si:Ga photodiode, a Si:As photodiode. Additionally or alternatively, the optical sensor 330 may comprise at least one photoconductive sensor such as a PbS or PbSe sensor, a bolometer, preferably a bolometer selected from the group consisting of a VO bolometer and an amorphous Si bolometer. The optical sensor 330 may be sensitive in one or more of the ultraviolet, the visible or the infrared spectral range. Specifically, the optical sensor may be sensitive in the visible spectral range from 500 nm to 780 nm, most preferably at 650 nm to 750 nm or at 690 nm to 700 nm. Specifically, the optical sensor 330 may be sensitive in the near infrared region. Specifically, the optical sensor 330 may be sensitive in the part of the near infrared region where silicon photodiodes are applicable, specifically in the range of 700 nm to 1000 nm. The optical sensor 330, specifically, may be sensitive in the infrared spectral range, specifically in the range of 780 nm to 3.0 micrometers. For example, each optical sensor 330, independently, may be or may include at least one element selected from the group consisting of a photodiode, a photocell, a photoconductor, a phototransistor or any combination thereof. For example, the optical sensor 330 may be or may include at least one element selected from the group consisting of a CCD sensor element, a CMOS sensor element, a photodiode, a photocell, a photoconductor, a phototransistor or any combination thereof. Any other type of photosensitive element may be used. The photosensitive element generally may fully or partially be made of inorganic materials and/or may fully or partially be made of organic materials. Most commonly, one or more photodiodes may be used, such as commercially available photodiodes, e.g., inorganic semiconductor photodiodes. The optical sensor 330 may comprise at least one sensor element 334 that includes a matrix of pixels. Thus, as an example, the optical sensor 330 may be part of or constitute a pixelated optical device. For example, the optical sensor 330 may be and/or may comprise at least one CCD and/or CMOS device. As an example, the optical sensor 330 may be part of or constitute at least one CCD and/or CMOS device having a matrix of pixels, each pixel forming a light-sensitive area. The sensor element 334 may be formed as a unitary, single device or as a combination of several devices. The matrix specifically may be or may comprise a rectangular matrix having one or more rows and one or more columns. The rows and columns specifically may be arranged in a rectangular fashion. However, other arrangements are feasible, such as non-rectangular arrangements. As an example, circular arrangements are also feasible, wherein the elements are arranged in concentric circles or ellipses about a center point. For example, the matrix may be a single row of pixels. Other arrangements are feasible.
The pixels of the matrix specifically may be equal in one or more of size, sensitivity and other optical, electrical and mechanical properties. The light-sensitive areas 332 of all optical sensors 330 of the matrix specifically may be located in a common plane, the common plane preferably facing the object 312, such that a light beam 322 or 324 propagating from the object 312 to the detector 310 may generate a light spot on the common plane. The light-sensitive area 332 may specifically be located on a surface of the respective optical sensor 330. Other embodiments, however, are feasible. The optical sensor 330 may include, for example, at least one CCD and/or CMOS device. As an example, the optical sensor 330 may be part of or constitute a pixelated optical device. As an example, the optical sensor 330 may be part of or constitute at least one CCD and/or CMOS device having a matrix of pixels, each pixel forming a light-sensitive area 332. The optical sensor 330 is configured for determining at least one first image including a plurality of reflection features generated by the object 312 in response to illumination by the illumination features. The optical sensor 330 is configured for determining at least one second image including at least one two dimensional image of, or two-dimensional information associated with, the object 312. The image itself, thus, may comprise pixels, the pixels of the image correlating to pixels of the matrix of the sensor element 334. Specifically, the optical sensor 330 may determine the at least one first image and the at least one second image in response to an illumination of its respective light-sensitive area 332 by a light beam 322 and/or a light beam 324 propagating from the object 312 to the detector 310. The light beams 322 may include reflected light beams 322 propagating from the dots 318 on the surface of the object 312 that are generated by the first illumination source 328. The light beams 324 may include reflected light beams 324 propagating from the object 312 or the environment surrounding the object 312 that originate from the floodlight 320 projected by the second illumination source 338. The optical sensor 330 may image, record and/or generate the at least one first image and/or the at least one second image. The first image and the second image may be data recorded by using the optical sensor 330, such as a plurality of electronic readings from an imaging device, such as the pixels of the sensor element 334. The first image and/or second image itself may comprise pixels, the pixels of the image correlating to pixels of the optical sensor 330. The first image and the second image may be determined, in particular recorded, at different time points. Recording of the first image and the second image may be performed with a temporal shift. Specifically, a single camera comprising the optical sensor 330 may record, with a temporal shift, a two-dimensional image and an image of a projected pattern. Recording the first and the second image at different time points may ensure that an evaluation device 346 can distinguish between the first and the second image and can apply the appropriate evaluation routine. Moreover, it is possible to adapt the illumination situation for the first image if necessary and, in particular, independently from the illumination for the second image. The optical sensor 330 may be synchronized with the illumination cycle of the projector 311. The system 300 may include at least one control unit 347.
The control unit 347 is configured for controlling the projector 311 and/or the optical sensor 330, in particular by using at least one processor and/or at least one application specific integrated circuit. Thus, as an example, the control unit 347 may include at least one data processing device having a software code stored thereon comprising a number of computer commands. The control unit 347 may provide one or more hardware elements for performing one or more of the named operations and/or may provide one or more processors with software running thereon for performing one or more of the named operations. Thus, as an example, the control unit may comprise one or more programmable devices such as one or more computers, application-specific integrated circuits (ASICs), Digital Signal Processors (DSPs), or Field Programmable Gate Arrays (FPGAs) which are configured to perform the above-mentioned controlling. Additionally or alternatively, however, the control unit 347 may also fully or partially be embodied by hardware. The control unit 347 may be integrated within the evaluation device 346. Alternatively, the control unit 347 may be separate from the evaluation device 346 and integrated in the housing 305, for example. The control unit 347 may include at least one microcontroller. The control unit 347 may be configured for controlling the optical sensor 330 and/or the projector 311. The control unit 347 may be configured for triggering projecting of the illumination pattern and/or imaging of the second image. Specifically, the control unit 347 may be configured for controlling the optical sensor 330, in particular frame rate and/or illumination time, via trigger signals. The control unit 347 may be configured for adapting and/or adjusting the illumination time from frame to frame. This may allow adapting and/or adjusting the illumination time for the first image, e.g., in order to have contrasts at the edges, and at the same time adapting and/or adjusting the illumination time for the second image to maintain contrast of the reflection features. Additionally, the control unit 347 may, at the same time and independently, control the elements of the first illumination source 328 and/or the second illumination source 338. Specifically, the control unit 347 may be configured for adapting exposure time for projection of the illumination pattern. The second image may be recorded with different illumination times. Dark regions of the object 312, or the environment surrounding the object 312, may require more light in comparison to lighter regions, which may result in saturation of the lighter regions. Therefore, the detector 310 may be configured for recording a plurality of images of the reflection pattern, wherein the images may be recorded with different illumination times. The detector 310 may be configured for generating and/or composing the second image from said images; a minimal sketch of such a composition is given below. The evaluation device 346 may be configured for performing at least one algorithm on said images which were recorded with different illumination times. The control unit 347 may be configured for controlling the first illumination source 328 and the second illumination source 338. The control unit 347 may be configured for triggering illumination of the object 312 by light generated by the second illumination source 338 and imaging of the first image. The control unit 347 may be configured for adapting exposure time for projection of the illumination pattern by the first illumination source 328 and illumination by light generated by the second illumination source 338.
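A minimal sketch of composing the image of the reflection pattern from recordings with different illumination times follows. The saturation and noise-floor thresholds are assumed values, and the simple scale-and-average merge is only one of the possible algorithms the evaluation device 346 might perform.

```python
import numpy as np

def compose_from_exposures(images, illumination_times,
                           saturation=0.95, noise_floor=0.01):
    """Compose one image of the reflection pattern from images recorded
    with different illumination times.

    images:             list of 2D arrays, intensities normalized to [0, 1]
    illumination_times: matching list of illumination times
    Each pixel is averaged over those exposures in which it is neither
    saturated nor lost in the noise floor, after scaling every recording
    to a common reference illumination time."""
    acc = np.zeros_like(images[0], dtype=float)
    weight = np.zeros_like(acc)
    t_ref = max(illumination_times)
    for img, t in zip(images, illumination_times):
        valid = (img < saturation) & (img > noise_floor)
        acc[valid] += img[valid] * (t_ref / t)   # scale to reference exposure
        weight[valid] += 1.0
    return np.divide(acc, weight, out=acc, where=weight > 0)
```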
The control unit 347 may also be configured for controlling the illumination cycle of the projector 311. The control unit 347 may facilitate synchronization between the illumination cycle of the projector 311 and the optical sensor 330. The control unit 347 may transmit a signal to each of the projector 311 and the optical sensor 330. The signal transmitted to the projector 311 may cause the projector 311 to cycle between the first illumination source 328 and the second illumination source 338. The signal transmitted to the optical sensor 330 may indicate the stage in the illumination cycle and, specifically, the source of illumination being projected onto the object 312. The optical sensor 330 may be active, i.e., in a suitable mode for capturing images and/or detecting light, during each illumination stage of the illumination cycle; a minimal sketch of such a cycle is given below. The system 300 may include at least one first filter element (not shown) configured for transmitting light in the infrared spectral range and for at least partially blocking light of other spectral ranges. The first filter element may be a monochromatic bandpass filter configured for transmitting light in a small spectral range. For example, the spectral range or bandwidth may be ±100 nm, preferably ±50 nm, most preferably ±35 nm or even less. For example, the first filter element may be configured for transmitting light having a central wavelength of 808 nm, 830 nm, 850 nm, 905 nm or 940 nm. For example, the first filter element may be configured for transmitting light having a central wavelength of 850 nm with a bandwidth of 70 nm or less. The first filter element may have a minimal angle dependency such that the spectral range can be small. This may result in a low dependency on ambient light, wherein at the same time an enhanced vignetting effect can be prevented. For example, the detector 310 may comprise the single camera having the optical sensor 330 and, in addition, the first filter element. The first filter element may ensure that, even in the presence of ambient light, recording of the reflection pattern is possible and, at the same time, the laser output power can be maintained low such that eye-safe operation in laser class 1 is ensured. Additionally or alternatively, the system 300 may include at least one second filter element (not shown). The second filter element may be a band-pass filter. For example, the second filter element may be a long pass filter configured for blocking visible light and for passing light above a wavelength of 780 nm. The band pass filter may be positioned between the light-sensitive area 332, for example of a CMOS chip, and a transfer device 344. The spectrum of the first illumination source 328 and/or of the second illumination source 338 may be selected depending on the used filter elements. For example, in case of the first filter element having a central wavelength of 850 nm, the first illumination source 328 may include at least one light source generating a wavelength of 850 nm, such as at least one infrared (IR) LED.
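The synchronization between the illumination cycle of the projector 311 and the optical sensor 330 might look, in a highly simplified form, like the following sketch; the frame ordering, the exposure values and all interfaces are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source: str        # "flood" (second source 338) or "pattern" (first source 328)
    exposure_ms: float # illumination time signaled to the optical sensor

def illumination_cycle(n_cycles, flood_exposure_ms=8.0, pattern_exposure_ms=2.0):
    """Yield alternating frames of an illumination cycle: the projector is
    switched between the flood and the pattern source, and each frame
    carries the stage of the cycle and the exposure time with which the
    optical sensor is triggered; exposure may be adapted frame to frame."""
    for _ in range(n_cycles):
        yield Frame("flood", flood_exposure_ms)      # first image: 2D image
        yield Frame("pattern", pattern_exposure_ms)  # second image: reflection pattern

for frame in illumination_cycle(2):
    print(f"trigger sensor: source={frame.source}, exposure={frame.exposure_ms} ms")
```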
The detector 310 may include at least one transfer device 344 that includes one or more of: at least one lens, for example at least one lens selected from the group consisting of at least one focus-tunable lens, at least one aspheric lens, at least one spheric lens, at least one Fresnel lens; at least one diffractive optical element; at least one concave mirror; at least one beam deflection element, preferably at least one mirror; at least one beam splitting element, preferably at least one of a beam splitting cube or a beam splitting mirror; at least one multi-lens system. In particular, the transfer device 344 may include at least one collimating lens adapted to focus at least one object point in an image plane. The system 300 also includes the evaluation device 346 that is communicatively coupled to the optical sensor 330 and/or the projector 311 via a connector 354. The evaluation device 346 may be a computing device 346 that includes at least one processor 348 in communication with at least one memory 350 and at least one database 352. The evaluation device 346 may also include the control unit 347. The database 352 may store data associated with image analysis and/or image processing, such as, for example, data for material detection and/or image recognition or authentication of the object 312, which will be described in further detail herein. The memory 350 may store instructions that are executable by the processor 348 to enable the evaluation device 346 to perform its intended function. The processor 348 may, for example, include one or more processing units (e.g., in a multi-core configuration) for executing instructions. The processor 348 may include any programmable system, including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The memory 350 may, for example, be any device allowing information such as executable instructions to be stored and retrieved. The memory 350 may further include one or more computer readable media. The evaluation device 346 is configured for evaluating the first image and the second image. The evaluation of the first image may include generating a two-dimensional image of at least a portion of the object 312. The evaluation of the second image may include evaluating the two dimensional image of, or the two dimensional information associated with, the object 312, comparing the two dimensional images and/or information to data stored in a database (e.g., the database 352), and/or authenticating at least a portion of the object 312. As described above, the optical sensor 330 is configured for determining the at least one first image including a plurality of reflection features generated by the object 312 in response to illumination by the illumination features. Each reflection feature may be or include a feature in an image plane generated by the object 312 in response to illumination, specifically with at least one illumination feature. The evaluation device 346 may then evaluate the first image based on the reflection features. Each of the reflection features includes at least one beam profile. The beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles. The evaluation device 346 is configured for determining beam profile information for each of the reflection features by analysis of their beam profiles.
Determining the beam profile may comprise identifying at least one reflection feature provided by the optical sensor 330 and/or selecting at least one reflection feature provided by the optical sensor 330 and evaluating at least one intensity distribution of the reflection feature. As an example, a region of the image may be used and evaluated for determining the intensity distribution, such as a three-dimensional intensity distribution or a two-dimensional intensity distribution, such as along an axis or line through the image. As an example, a center of illumination by the light beam 322 and/or 324 may be determined, such as by determining the at least one pixel having the highest illumination, and a cross-sectional axis may be chosen through the center of illumination. The intensity distribution may be an intensity distribution as a function of a coordinate along this cross-sectional axis through the center of illumination. Other evaluation algorithms are feasible. The evaluation device 346 may be configured for performing at least one image analysis and/or image processing in order to identify the reflection features. The image analysis and/or image processing may use at least one feature detection algorithm. The image analysis and/or image processing may include one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between an image created by the sensor signals and at least one offset; an inversion of sensor signals by inverting an image created by the sensor signals; a formation of a difference image between images created by the sensor signals at different times; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a blob detector; applying a corner detector; applying a Determinant of Hessian filter; applying a principal curvature-based region detector; applying a maximally stable extremal regions detector; applying a generalized Hough-transformation; applying a ridge detector; applying an affine invariant feature detector; applying an affine-adapted interest point operator; applying a Harris affine region detector; applying a Hessian affine region detector; applying a scale-invariant feature transform; applying a scale-space extrema detector; applying a local feature detector; applying a speeded up robust features algorithm; applying a gradient location and orientation histogram algorithm; applying a histogram of oriented gradients descriptor; applying a Deriche edge detector; applying a differential edge detector; applying a spatio-temporal interest point detector; applying a Moravec corner detector; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon-transformation; applying a Hough-transformation; applying a wavelet-transformation; a thresholding; creating a binary image. The region of interest may be determined manually by a user or may be determined automatically, such as by recognizing an object within the image generated by the optical sensor 330.
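As one concrete instance of the listed operations (thresholding, creating a binary image, applying a blob detector), reflection features might be identified as in the following sketch; the threshold and minimum blob size are assumed values.

```python
import numpy as np
from scipy import ndimage

def detect_reflection_features(image, threshold=0.2, min_pixels=4):
    """Identify candidate reflection features by thresholding followed by
    connected-component labeling of the resulting binary image.

    Returns a list of (slice_y, slice_x) regions of interest."""
    binary = image > threshold                 # thresholding -> binary image
    labels, n_blobs = ndimage.label(binary)    # connected components (blobs)
    regions = []
    for index, sl in enumerate(ndimage.find_objects(labels), start=1):
        if np.count_nonzero(labels[sl] == index) >= min_pixels:
            regions.append(sl)                 # keep sufficiently large blobs
    return regions
```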
For example, the first illumination source 328 may be configured for generating and/or projecting the cloud of dots 318 such that a plurality of illuminated regions is generated on the optical sensor, for example the CMOS detector. Additionally, disturbances may be present on the optical sensor 330, such as disturbances due to speckles and/or extraneous light and/or multiple reflections. The evaluation device 346 may be adapted to determine at least one region of interest, for example one or more pixels illuminated by the one or more light beams 322 and/or light beams 324. The region of interest may optionally be used for determination of a longitudinal coordinate of the object 312. For example, the evaluation device 346 may be adapted to perform a filtering method, for example, a blob-analysis and/or an edge filter and/or object recognition method. The evaluation device 346 may be configured for performing at least one image correction. The image correction may comprise at least one background subtraction. The evaluation device 346 may be adapted to remove influences from background light from the reflection beam profile, for example, by an imaging without further illumination. The analysis of the beam profile may include evaluating of the beam profile. The analysis of the beam profile may comprise at least one mathematical operation and/or at least one comparison and/or at least one symmetrizing and/or at least one filtering and/or at least one normalizing. For example, the analysis of the beam profile may comprise at least one of a histogram analysis step, a calculation of a difference measure, application of a neural network, application of a machine learning algorithm. The evaluation device 346 may be configured for symmetrizing and/or for normalizing and/or for filtering the beam profile, in particular to remove noise or asymmetries from recording under larger angles, recording edges or the like. The evaluation device 346 may filter the beam profile by removing high spatial frequencies, such as by spatial frequency analysis and/or median filtering or the like. Symmetrizing may be performed by determining a center of intensity of the light spot and averaging all intensities at the same distance to the center. The evaluation device 346 may be configured for normalizing the beam profile to a maximum intensity, in particular to account for intensity differences due to the recorded distance. The evaluation device 346 may be configured for removing influences from background light from the reflection beam profile, for example, by an imaging without illumination. The reflection feature may cover or may extend over at least one pixel of the image. For example, the reflection feature may cover or may extend over a plurality of pixels. The evaluation device 346 may be configured for determining and/or for selecting all pixels connected to and/or belonging to the reflection feature, e.g., a light spot. The evaluation device 346 may be configured for determining the center of intensity by

$$R_{coi} = \frac{1}{I_{total}} \sum_j I_j \cdot r_{pixel,j},$$

wherein $R_{coi}$ is the position of the center of intensity, $r_{pixel,j}$ is the position of pixel $j$, $I_j$ is the intensity of pixel $j$, and $I_{total} = \sum_j I_j$ is the total intensity, with $j$ running over the pixels connected to and/or belonging to the reflection feature. The evaluation device 346 is configured for determining the beam profile information for each of the reflection features by analysis of their beam profiles. The beam profile information may include information about at least one geometrical feature (e.g., a shape or a contour) of the object 312.
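The center-of-intensity relation given above translates directly into code; a minimal sketch, assuming the pixels of one reflection feature have been cut out into a small intensity patch, is:

```python
import numpy as np

def center_of_intensity(patch):
    """R_coi = (1/I_total) * sum_j I_j * r_pixel_j over the pixels of a
    reflection feature, with I_total = sum_j I_j.

    patch: 2D array of the intensities belonging to the feature."""
    ys, xs = np.indices(patch.shape)   # pixel positions r_pixel_j
    i_total = patch.sum()              # total intensity I_total
    return (patch * ys).sum() / i_total, (patch * xs).sum() / i_total
```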
Additionally, the beam profile information may include information about a material property of said surface point or region having reflected the illumination feature. For example, the beam profile information may include information about the skin of a human object 312, such as a human face. The beam profile information may optionally also include information about the longitudinal coordinate of the surface point or region having reflected the illumination feature. The analysis of the beam profile of one of the reflection features may comprise determining at least one first area and at least one second area of the beam profile. The first area of the beam profile may be an area A1 and the second area of the beam profile may be an area A2. The evaluation device 346 may be configured for integrating the first area and the second area. The evaluation device 346 may be configured to derive a combined signal, in particular a quotient Q, by one or more of dividing the integrated first area and the integrated second area, dividing multiples of the integrated first area and the integrated second area, or dividing linear combinations of the integrated first area and the integrated second area. The evaluation device 346 may be configured for determining at least two areas of the beam profile and/or for segmenting the beam profile into at least two segments comprising different areas of the beam profile, wherein overlapping of the areas may be possible as long as the areas are not congruent. For example, the evaluation device 346 may be configured for determining a plurality of areas such as two, three, four, five, or up to ten areas. The evaluation device 346 may be configured for segmenting the light spot into at least two areas of the beam profile and/or for segmenting the beam profile into at least two segments comprising different areas of the beam profile. The evaluation device 346 may be configured for determining, for at least two of the areas, an integral of the beam profile over the respective area. The evaluation device 346 may be configured for comparing at least two of the determined integrals. Specifically, the evaluation device 346 may be configured for determining at least one first area and at least one second area of the reflection beam profile. The first area of the beam profile and the second area of the reflection beam profile may be one or both of adjacent or overlapping regions. The first area of the beam profile and the second area of the beam profile may not be congruent in area. For example, the evaluation device 346 may be configured for dividing a sensor region of the CMOS sensor into at least two sub-regions, wherein the evaluation device may be configured for dividing the sensor region of the CMOS sensor into at least one left part and at least one right part and/or at least one upper part and at least one lower part and/or at least one inner part and at least one outer part. Additionally or alternatively, the detector 310 may comprise at least two optical sensors 330, wherein the light-sensitive areas 332 of a first optical sensor and of a second optical sensor may be arranged such that the first optical sensor is adapted to determine the first area of the reflection beam profile of the reflection feature and the second optical sensor is adapted to determine the second area of the reflection beam profile of the reflection feature. The evaluation device 346 may be adapted to integrate the first area and the second area.
The evaluation device 346 may be configured for using at least one predetermined relationship between the quotient Q and the longitudinal coordinate for determining the longitudinal coordinate. The predetermined relationship may be one or more of an empirical relationship, a semi-empirical relationship, and an analytically derived relationship. The evaluation device 346 may comprise at least one data storage device for storing the predetermined relationship, such as a lookup list or a lookup table, which may be stored in database 352. The first area of the beam profile may comprise essentially edge information of the beam profile and the second area of the beam profile may comprise essentially center information of the beam profile, and/or the first area of the beam profile may comprise essentially information about a left part of the beam profile and the second area of the beam profile may comprise essentially information about a right part of the beam profile. The beam profile may have a center, i.e. a maximum value of the beam profile and/or a center point of a plateau of the beam profile and/or a geometrical center of the light spot, and falling edges extending from the center. The second region may comprise inner regions of the cross section and the first region may comprise outer regions of the cross section. Preferably, the center information has a proportion of edge information of less than 10%, more preferably of less than 5%; most preferably the center information comprises no edge content. The edge information may comprise information of the whole beam profile, in particular from center and edge regions. The edge information may have a proportion of center information of less than 10%, preferably of less than 5%; more preferably the edge information comprises no center content. At least one area of the beam profile may be determined and/or selected as the second area of the beam profile if it is close to or around the center and comprises essentially center information. At least one area of the beam profile may be determined and/or selected as the first area of the beam profile if it comprises at least parts of the falling edges of the cross section. For example, the whole area of the cross section may be determined as the first region. Other selections of the first area A1 and second area A2 may be feasible. For example, the first area may comprise essentially outer regions of the beam profile and the second area may comprise essentially inner regions of the beam profile. For example, in case of a two-dimensional beam profile, the beam profile may be divided into a left part and a right part, wherein the first area may comprise essentially areas of the left part of the beam profile and the second area may comprise essentially areas of the right part of the beam profile. The evaluation device 346 may be configured to derive the quotient Q by one or more of dividing the first area and the second area, dividing multiples of the first area and the second area, or dividing linear combinations of the first area and the second area. The evaluation device 346 may be configured for deriving the quotient Q by

$$Q = \frac{\int\!\!\int_{A_1} E(x,y)\,dx\,dy}{\int\!\!\int_{A_2} E(x,y)\,dx\,dy},$$

wherein x and y are transversal coordinates, A1 and A2 are the first and second area of the beam profile, respectively, and E(x,y) denotes the beam profile. The evaluation device 346 may be configured for determining at least one three-dimensional image and/or 3D-data using the determined beam profile information.
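A sketch of applying such a predetermined relationship, here assumed to be an empirically tabulated lookup table; the calibration values below are invented placeholders, not data from the disclosure:

    import numpy as np

    # Hypothetical calibration table relating Q to the longitudinal coordinate z.
    q_calibration = np.array([0.2, 0.5, 1.0, 1.8, 2.9])   # quotient values
    z_calibration = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # distances in meters

    def longitudinal_coordinate(q: float) -> float:
        # Linear interpolation into the stored table stands in for the
        # empirical / semi-empirical / analytically derived relationship.
        return float(np.interp(q, q_calibration, z_calibration))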
The image or images recorded by the camera, including the reflection pattern, may be used to determine the three-dimensional image. As outlined above, the evaluation device 346 is configured for determining at least one geometrical feature of the object 312 based on the reflection features. The evaluation device 346 may optionally be configured for determining for each of the reflection features a longitudinal coordinate. The evaluation device 346 may be configured for generating 3D-data and/or the three-dimensional image by merging the reflection features of the first image. The evaluation device 346 may optionally be configured to merge the reflection features with the determined longitudinal coordinate of the respective reflection feature. The evaluation device 346 may be configured for merging and/or fusing the determined 3D-data and/or the three-dimensional image and the information determined from the first image, i.e., the at least one geometrical feature and/or a material property of the object 312 and, optionally, its location, in order to identify the object 312 in a scene, in particular in the environment surrounding the object 312. The evaluation device 346 may be configured for identifying the reflection features which are located inside an image region of the geometrical feature and/or for identifying the reflection features which are located outside the image region of the geometrical feature; a minimal sketch of this inside/outside split is given below. The evaluation device 346 may be configured for determining an image position of the identified geometrical feature in the first image. The image position may be defined by pixel coordinates, e.g. x and y coordinates, of pixels of the geometrical feature. The evaluation device 346 may be configured for determining and/or assigning and/or selecting at least one border and/or limit of the geometrical feature in the first image. The border and/or limit may be given by at least one edge or at least one contour of the geometrical feature. The evaluation device 346 may be configured for determining the pixels of the first image inside the border and/or limit and their image position in the first image. The evaluation device 346 may be configured for determining at least one image region of the second image corresponding to the geometrical feature in the first image by identifying the pixels of the second image corresponding to the pixels of the first image inside the border and/or limit of the geometrical feature. The evaluation device 346 is configured for determining the at least one depth level from the beam profile information of the reflection features located inside and/or outside of the image region of the geometrical feature. The object 312 may include a plurality of elements at different depth levels. For example, in some instances, the object 312 is a face and includes various features (eyes, nose, etc.) at varying depth levels. The depth level may be a bin or step of a depth map of the pixels of the second image. As outlined above, the evaluation device 346 may be configured for determining for each of the reflection features a longitudinal coordinate from their beam profiles. The evaluation device 346 may be configured for determining the depth levels from the longitudinal coordinates of the reflection features located inside and/or outside of the image region of the geometrical feature. The evaluation device 346 is configured for determining features of the object 312 by considering the depth level and pre-determined or predefined information about shape, contours, and/or size of the object 312.
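The inside/outside classification can be sketched as follows, assuming the border of the geometrical feature has already been rasterized into a boolean pixel mask; all names are illustrative:

    import numpy as np

    def split_features(features, region_mask: np.ndarray):
        """features: iterable of (x, y) pixel positions of reflection features;
        region_mask: 2D boolean array, True inside the geometrical feature."""
        inside, outside = [], []
        for x, y in features:
            (inside if region_mask[int(y), int(x)] else outside).append((x, y))
        return inside, outside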
For example, the information about shape and/or size may be entered by a user, or may be collected over time and stored in database 352. For example, the information about shape, contours, and size of an object 312 may be measured in an additional measurement. As outlined above, the evaluation device 346 is configured for determining the depth level of features of the object 312. If, in addition, the shape, contour, and/or size of the object 312 are known, the evaluation device 346 can use this information to authenticate the object 312. The optical sensor 330 may determine the two-dimensional image from the second image and a resulting 3D depth map from the first image. The depth map may estimate features of the object 312. The depth map can also be distorted by different effects, such as the reflectance of skin, and/or the 3D depth map may be too sparse. The evaluation device may be configured to determine at least one material property which may be used to correct two-dimensional image data and/or the three-dimensional image by image processing algorithms. In some examples, a task may be to authenticate the object 312. In particular, the evaluation device 346 may be configured to authenticate a face of a human object 312. The evaluation device 346 identifies or determines one or more geometrical features (e.g., eyes, nose of the face) based on the first image and identifies or determines one or more two-dimensional images based on the second image. The evaluation device 346 may also determine one or more material properties (e.g., skin, hair) as described below. The facial image of the object 312 is divided into multiple patches based on 2D image analysis. Each of the patches is input into an image processing algorithm, such as a neural network or a machine learning algorithm, which performs a comparison of the 2D images with stored data related to authentication of the object 312. In some embodiments, authentication is performed based on the 2D image analysis alone. Authentication may also utilize the geometrical features and/or the material properties determined based on the first image. For example, the evaluation device 346 may include the at least one database 352 including a list and/or table including the geometrical features and material properties associated with the object 312. Authentication of the object may thereby be performed based on an output of the comparison. The determination or detection of one or more material properties of the object 312 and/or one or more geometrical features of the object 312 may be an additional security feature to identify and prevent spoof attacks. In some situations, authentication based on 2D image analysis may be insufficient, as a two-dimensional image of an object 312 (e.g., a human or a more elaborate mask) could in theory result in an inaccurate authentication (e.g., a false positive or a false negative). The reflection features may be used to identify a material property (e.g., biological material such as skin). The geometrical features (e.g., depth information) may be used to make a plausibility check as to whether the object 312 is at a suitable distance from the detector 310. In this regard, reflection features, depth information, and/or material properties of the object 312 may be used to perform authentication tasks in addition to the two-dimensional image analysis. For example, a material profile (feature vector) for a specific object 312 (e.g., a specific human) may be employed to facilitate authenticating the object 312.
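Purely as an illustration of the patch-based comparison step, the following sketch splits a face image into patches and compares toy embeddings against an enrolled template; a trained neural network or machine learning model would replace the flatten-and-normalize embedding, and the threshold is an arbitrary placeholder:

    import numpy as np

    def patches(img: np.ndarray, n: int = 4):
        h, w = img.shape
        return [img[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                for i in range(n) for j in range(n)]

    def embed(patch: np.ndarray) -> np.ndarray:
        v = patch.astype(float).ravel()
        return v / (np.linalg.norm(v) + 1e-12)

    def authenticate(face_img, enrolled_img, threshold=0.9) -> bool:
        # Both images are assumed to have the same shape and alignment.
        sims = [float(e1 @ e2) for e1, e2 in
                zip(map(embed, patches(face_img)),
                    map(embed, patches(enrolled_img)))]
        return float(np.mean(sims)) >= threshold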
The evaluation device may be configured for determining at least one material property m of the object 312 by evaluating the beam profile of at least one of the reflection features, preferably beam profiles of a plurality of reflection features. With respect to details of determining at least one material property by evaluating the beam profile, reference is made to US 2022/0157044 A1 and WO 2022/101429 A1, the full content of each of which is incorporated herein by reference. The term “material property” refers to at least one arbitrary property of the material configured for characterizing and/or identification and/or classification of the material. For example, the material property may be a property selected from the group consisting of: roughness, penetration depth of light into the material, a property characterizing the material as biological or non-biological material, a reflectivity, a specular reflectivity, a diffuse reflectivity, a surface property, a measure for translucence, a scattering, specifically a back-scattering behavior, or the like. The at least one material property may be a property selected from the group consisting of: a scattering coefficient, a translucency, a transparency, a deviation from a Lambertian surface reflection, a speckle, and the like. Determining at least one material property refers to one or more of determining the material property and assigning the material property to the object. The evaluation device 346 may include the at least one database 352 that includes a list and/or table, such as a lookup list or a lookup table, of predefined and/or predetermined material properties. The list and/or table of material properties may be determined and/or generated by performing at least one test measurement using the system 300, for example by performing material tests using samples having known material properties. The list and/or table of material properties may be determined and/or generated at the manufacturer site and/or by the user of the system 300. The material property may additionally be assigned to a material classifier such as one or more of a material name, a material group such as biological or non-biological material, translucent or non-translucent materials, metal or non-metal, skin or non-skin, fur or non-fur, carpet or non-carpet, reflective or non-reflective, specular reflective or non-specular reflective, foam or non-foam, hair or non-hair, roughness groups, or the like. The evaluation device 346 may include the at least one database 352 including a list and/or table including the material properties and associated material name and/or material group. For example, without wishing to be bound by this theory, human skin may have a reflection profile, also denoted back-scattering profile, comprising parts generated by back reflection of the surface, denoted as surface reflection, and parts generated by very diffuse reflection from light penetrating the skin, denoted as the diffuse part of the back reflection. With respect to the reflection profile of human skin, reference is made to “Lasertechnik in der Medizin: Grundlagen, Systeme, Anwendungen”, “Wirkung von Laserstrahlung auf Gewebe”, 1991, pages 171 to 266, Jurgen Eichler, Theo Seiler, Springer Verlag, ISBN 0939-0979. The surface reflection of the skin may increase with the wavelength increasing towards the near infrared. Further, the penetration depth may increase with increasing wavelength from visible to near infrared. The diffuse part of the back reflection may increase with the penetrating depth of the light.
These properties may be used to distinguish skin from other materials by analyzing the back-scattering profile. Specifically, the evaluation device 346 may be configured for comparing the beam profile of the reflection feature, also denoted reflection beam profile, with at least one predetermined and/or prerecorded and/or predefined beam profile. The predetermined and/or prerecorded and/or predefined beam profile may be stored in a table or a lookup table and may be determined, e.g., empirically, and may, as an example, be stored in at least one data storage device of the display device. For example, the predetermined and/or prerecorded and/or predefined beam profile may be determined during initial start-up of a device embodying the system 300. For example, the predetermined and/or prerecorded and/or predefined beam profile may be stored in at least one data storage device, e.g. by software. The reflection feature may be identified as being generated by biological tissue in case the reflection beam profile and the predetermined and/or prerecorded and/or predefined beam profile are identical. The comparison may comprise overlaying the reflection beam profile and the predetermined or predefined beam profile such that their centers of intensity match. The comparison may comprise determining a deviation, e.g. a sum of squared point-to-point distances, between the reflection beam profile and the predetermined and/or prerecorded and/or predefined beam profile. The evaluation device 346 may be configured for comparing the determined deviation with at least one threshold, wherein in case the determined deviation is below and/or equal to the threshold the surface is indicated as biological tissue and/or the detection of biological tissue is confirmed. The threshold value may be stored in a table or a lookup table and may be determined, e.g., empirically and may, as an example, be stored in at least one data storage device. Additionally or alternatively, for identification of whether the reflection feature was generated by biological tissue, the evaluation device may be configured for applying at least one image filter to the image of the area. As further used herein, the term “image” refers to a two-dimensional function, f(x,y), wherein brightness and/or color values are given for any x,y-position in the image. The position may be discretized corresponding to the recording pixels. The brightness and/or color may be discretized corresponding to a bit depth of the optical sensor. As used herein, the term “image filter” refers to at least one mathematical operation applied to the beam profile and/or to the at least one specific region of the beam profile. Specifically, the image filter ϕ maps an image f, or a region of interest in the image, onto a real number, ϕ(f(x,y)) = φ, wherein φ denotes a feature, in particular a material feature. Images may be subject to noise, and the same holds true for features. Therefore, features may be random variables. The features may be normally distributed. If features are not normally distributed, they may be transformed to be normally distributed, such as by a Box-Cox transformation. The evaluation device may be configured for determining at least one material feature φ2m by applying at least one material dependent image filter ϕ2 to the image. As used herein, the term “material dependent” image filter refers to an image filter having a material dependent output. The output of the material dependent image filter is denoted herein “material feature φ2m” or “material dependent feature φ2m”.
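A minimal sketch of this comparison, assuming the two profiles are already overlaid at their centers of intensity and sampled on the same grid; the normalization and threshold value are illustrative assumptions:

    import numpy as np

    def is_biological(reflection_profile, reference_profile, threshold=0.05):
        a = np.asarray(reflection_profile, dtype=float)
        b = np.asarray(reference_profile, dtype=float)
        a, b = a / a.sum(), b / b.sum()          # crude common normalization
        deviation = np.sum((a - b) ** 2)         # sum of squared point-to-point distances
        return deviation <= threshold            # below/equal: tissue confirmed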
The material feature may be or may comprise at least one item of information about the at least one material property of the surface of the area having generated the reflection feature. The material dependent image filter may be at least one filter selected from the group consisting of: a luminance filter; a spot shape filter; a squared norm gradient; a standard deviation; a smoothness filter such as a Gaussian filter or median filter; a grey-level-occurrence-based contrast filter; a grey-level-occurrence-based energy filter; a grey-level-occurrence-based homogeneity filter; a grey-level-occurrence-based dissimilarity filter; a Law's energy filter; a threshold area filter; or a linear combination thereof; or a further material dependent image filter ϕ2other which correlates to one or more of the luminance filter, the spot shape filter, the squared norm gradient, the standard deviation, the smoothness filter, the grey-level-occurrence-based energy filter, the grey-level-occurrence-based homogeneity filter, the grey-level-occurrence-based dissimilarity filter, the Law's energy filter, or the threshold area filter, or a linear combination thereof by |ρ(ϕ2other, ϕm)| ≥ 0.40, with ϕm being one of the luminance filter, the spot shape filter, the squared norm gradient, the standard deviation, the smoothness filter, the grey-level-occurrence-based energy filter, the grey-level-occurrence-based homogeneity filter, the grey-level-occurrence-based dissimilarity filter, the Law's energy filter, or the threshold area filter, or a linear combination thereof. The further material dependent image filter ϕ2other may correlate to one or more of the material dependent image filters by |ρ(ϕ2other, ϕm)| ≥ 0.60, preferably by |ρ(ϕ2other, ϕm)| ≥ 0.80. The material dependent image filter may be at least one arbitrary filter ϕ that passes a hypothesis testing. As used herein, the term “passes a hypothesis testing” refers to the fact that a null hypothesis H0 is rejected and an alternative hypothesis H1 is accepted. The hypothesis testing may comprise testing the material dependency of the image filter by applying the image filter to a predefined data set. The data set may comprise a plurality of beam profile images. As used herein, the term “beam profile image” refers to a sum of NB Gaussian radial basis functions,

$$f_k(x,y) = \left| \sum_{l=0}^{N_B-1} g_{lk}(x,y) \right|, \qquad g_{lk}(x,y) = a_{lk}\, e^{-(\alpha(x-x_{lk}))^2}\, e^{-(\alpha(y-y_{lk}))^2},$$

wherein each of the NB Gaussian radial basis functions is defined by a center (xlk, ylk), a prefactor alk, and an exponential factor α = 1/ε. The exponential factor is identical for all Gaussian functions in all images. The center positions (xlk, ylk) are identical for all images fk: (x0, x1, . . . , xNB−1), (y0, y1, . . . , yNB−1). Each of the beam profile images in the dataset may correspond to a material classifier and a distance. The material classifier may be a label such as ‘Material A’, ‘Material B’, etc. The beam profile images may be generated by using the above formula for fk(x,y) in combination with the following parameter table:

Image Index | Material classifier, Material Index | Distance z | Parameters
k = 0 | Skin, m = 0 | 0.4 m | (a00, a10, . . . , aNB−1,0)
k = 1 | Skin, m = 0 | 0.6 m | (a01, a11, . . . , aNB−1,1)
k = 2 | Fabric, m = 1 | 0.6 m | (a02, a12, . . . , aNB−1,2)
. . . | . . . | . . . | . . .
k = N | Material J, m = J − 1 | | (a0N, a1N, . . . , aNB−1,N)

The values for x and y are integers corresponding to pixels, with (x, y) ∈ [0, 1, . . . , 31]². The images may have a pixel size of 32×32.
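The following sketch generates one synthetic beam profile image f_k on the 32×32 grid from N_B Gaussian radial basis functions as defined above; the parameter values passed in would be placeholders:

    import numpy as np

    def beam_profile_image(prefactors, centers, eps=2.0, size=32):
        """prefactors: a_lk values, length N_B; centers: (N_B, 2) array of
        (x_l, y_l) positions; eps defines the exponential factor alpha = 1/eps."""
        alpha = 1.0 / eps
        x, y = np.meshgrid(np.arange(size), np.arange(size), indexing="xy")
        total = np.zeros((size, size))
        for a_l, (xl, yl) in zip(prefactors, centers):
            total += a_l * np.exp(-(alpha * (x - xl))**2) * np.exp(-(alpha * (y - yl))**2)
        return np.abs(total)   # f_k(x, y) = | sum_l g_lk(x, y) |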
The dataset of beam profile images may be generated by using the above formula for fk in combination with a parameter set to obtain a continuous description of fk. The values for each pixel in the 32×32 image may be obtained by inserting integer values from 0, . . . , 31 for x and y in fk(x,y). For example, for pixel (6,9), the value fk(6,9) may be computed. Subsequently, for each image fk, the feature value φk corresponding to the filter ϕ may be calculated, ϕ(fk(x,y), zk) = φk, wherein zk is a distance value corresponding to the image fk from the predefined data set. This yields a dataset with corresponding generated feature values φk. The hypothesis testing may use a null hypothesis that the filter does not distinguish between material classifiers. The null hypothesis may be given by H0: μ1 = μ2 = . . . = μJ, wherein μm is the expectation value of each material group corresponding to the feature values φk. Index m denotes the material group. The hypothesis testing may use as alternative hypothesis that the filter does distinguish between at least two material classifiers. The alternative hypothesis may be given by H1: ∃m, m′: μm ≠ μm′. As used herein, the term “not distinguish between material classifiers” refers to the expectation values of the material classifiers being identical. As used herein, the term “distinguishes material classifiers” refers to at least two expectation values of the material classifiers differing. As used herein, “distinguishes at least two material classifiers” is used synonymously to “suitable material classifier”. The hypothesis testing may comprise at least one analysis of variance (ANOVA) on the generated feature values. In particular, the hypothesis testing may comprise determining a mean value of the feature values for each of the J materials, i.e. in total J mean values,

$$\bar{\varphi}_m = \frac{\sum_i \varphi_{i,m}}{N_m}, \quad \text{for } m \in [0, 1, \ldots, J-1],$$

wherein Nm gives the number of feature values for each of the J materials in the predefined data set. The hypothesis testing may comprise determining a mean value of all N feature values,

$$\bar{\varphi} = \frac{\sum_m \sum_i \varphi_{i,m}}{N}.$$

The hypothesis testing may comprise determining a Mean Sum of Squares within:

$$mssw = \frac{\sum_m \sum_i (\varphi_{i,m} - \bar{\varphi}_m)^2}{N - J}.$$

The hypothesis testing may comprise determining a Mean Sum of Squares between:

$$mssb = \frac{\sum_m (\bar{\varphi}_m - \bar{\varphi})^2 N_m}{J - 1}.$$

The hypothesis testing may comprise performing an F-test:

$$CDF(x) = I_{\frac{d_1 x}{d_1 x + d_2}}\!\left(\frac{d_1}{2}, \frac{d_2}{2}\right), \quad \text{where } d_1 = N - J,\ d_2 = J - 1,$$
$$F(x) = 1 - CDF(x), \qquad p = F(mssb/mssw).$$

Herein, Ix is the regularized incomplete beta function,

$$I_x(a,b) = \frac{B(x; a, b)}{B(a, b)},$$

with the Euler beta function $B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt$ and $B(x; a,b) = \int_0^x t^{a-1}(1-t)^{b-1}\,dt$ being the incomplete beta function. The image filter may pass the hypothesis testing if a p-value, p, is smaller than or equal to a pre-defined level of significance. The filter may pass the hypothesis testing if p ≤ 0.075, preferably p ≤ 0.05, more preferably p ≤ 0.025, and most preferably p ≤ 0.01. For example, in case the pre-defined level of significance is α = 0.075, the image filter may pass the hypothesis testing if the p-value is smaller than α = 0.075. In this case the null hypothesis H0 can be rejected and the alternative hypothesis H1 can be accepted. The image filter thus distinguishes at least two material classifiers. Thus, the image filter passes the hypothesis testing. Image filters are described assuming that the reflection image comprises at least one reflection feature, in particular a spot image. A spot image f may be given by a function $f: \mathbb{R}^2 \to \mathbb{R}_{\geq 0}$, wherein the background of the image f may be already subtracted. However, other reflection features may be possible.
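As an illustration only, the sketch below runs a one-way ANOVA F-test on feature values grouped by material, using scipy's F survival function with the conventional degrees of freedom; it is meant to convey the structure of the test rather than to reproduce the exact notation above:

    import numpy as np
    from scipy.stats import f as f_dist

    def filter_passes(groups, significance=0.075):
        """groups: list of J arrays holding the feature values for each material."""
        groups = [np.asarray(g, dtype=float) for g in groups]
        J = len(groups)
        N = sum(len(g) for g in groups)
        grand_mean = np.concatenate(groups).mean()
        mssw = sum(((g - g.mean())**2).sum() for g in groups) / (N - J)
        mssb = sum(len(g) * (g.mean() - grand_mean)**2 for g in groups) / (J - 1)
        p = float(f_dist.sf(mssb / mssw, J - 1, N - J))   # p-value of the F-test
        return p <= significance   # reject H0: the filter distinguishes materials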
For example, the material dependent image filter may be a luminance filter. The luminance filter may return a luminance measure of a spot as material feature. The material feature may be determined by

$$\varphi_{2m} = \Phi(f, z) = -\frac{\int f(x)\,dx \cdot z^2}{d_{ray} \cdot n},$$

where f is the spot image. The distance of the spot is denoted by z, where z may be obtained for example by using a depth-from-defocus or depth-from-photon-ratio technique and/or by using a triangulation technique. The surface normal of the material is given by $n \in \mathbb{R}^3$ and can be obtained as the normal of the surface spanned by at least three measured points. The vector $d_{ray} \in \mathbb{R}^3$ is the direction vector of the light source. Since the position of the spot is known by using a depth-from-defocus or depth-from-photon-ratio technique and/or by using a triangulation technique, wherein the position of the light source is known as a parameter of the display device, dray is the difference vector between spot and light source positions. For example, the material dependent image filter may be a filter having an output dependent on a spot shape. This material dependent image filter may return a value which correlates to the translucence of a material as material feature. The translucence of materials influences the shape of the spots. The material feature may be given by

$$\varphi_{2m} = \Phi(f) = \frac{\int H(f(x) - \alpha h)\,dx}{\int H(f(x) - \beta h)\,dx},$$

wherein 0 < α, β < 1 are weights for the spot height h, and H denotes the Heaviside function, i.e. H(x) = 1 for x ≥ 0 and H(x) = 0 for x < 0. The spot height h may be determined by

$$h = \int_{B_r} f(x)\,dx,$$

where Br is an inner circle of a spot with radius r. For example, the material dependent image filter may be a squared norm gradient. This material dependent image filter may return a value which correlates to a measure of soft and hard transitions and/or roughness of a spot as material feature. The material feature may be defined by

$$\varphi_{2m} = \Phi(f) = \int \|\nabla f(x)\|^2\,dx.$$

For example, the material dependent image filter may be a standard deviation. The standard deviation of the spot may be determined by

$$\varphi_{2m} = \Phi(f) = \int (f(x) - \mu)^2\,dx,$$

wherein μ is the mean value given by μ = ∫ f(x) dx. For example, the material dependent image filter may be a smoothness filter such as a Gaussian filter or median filter. In one embodiment of the smoothness filter, this image filter may refer to the observation that volume scattering exhibits less speckle contrast compared to diffuse scattering materials. This image filter may quantify the smoothness of the spot corresponding to speckle contrast as material feature. The material feature may be determined by

$$\varphi_{2m} = \Phi(f, z) = \frac{\int |\mathcal{F}(f)(x) - f(x)|\,dx}{\int f(x)\,dx} \cdot \frac{1}{z},$$

wherein $\mathcal{F}$ is a smoothness function, for example a median filter or Gaussian filter. This image filter may comprise dividing by the distance z, as described in the formula above. The distance z may be determined for example using a depth-from-defocus or depth-from-photon-ratio technique and/or by using a triangulation technique. This may allow the filter to be insensitive to distance. In one embodiment of the smoothness filter, the smoothness filter may be based on the standard deviation of an extracted speckle noise pattern. A speckle noise pattern N can be described in an empirical way by

$$f(x) = f_0(x) \cdot (N(X) + 1),$$

where f0 is an image of a despeckled spot. N(X) is the noise term that models the speckle pattern. The computation of a despeckled image may be difficult. Thus, the despeckled image may be approximated with a smoothed version of f, i.e.
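Two of the simpler filters admit short discrete sketches; f is assumed to be a background-subtracted 2D spot image, and sums stand in for the integrals:

    import numpy as np

    def squared_norm_gradient(f: np.ndarray) -> float:
        gy, gx = np.gradient(f.astype(float))
        return float(np.sum(gx**2 + gy**2))      # integral of ||grad f||^2

    def standard_deviation_feature(f: np.ndarray) -> float:
        # mu taken as the integral of f, per the formula above
        # (a mean value would be the more conventional choice).
        mu = float(f.sum())
        return float(np.sum((f - mu)**2))         # integral of (f - mu)^2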
$$f_0 \approx \mathcal{F}(f),$$

wherein $\mathcal{F}$ is a smoothness operator like a Gaussian filter or median filter. Thus, an approximation of the speckle pattern may be given by

$$N(X) = \frac{f(x)}{\mathcal{F}(f(x))} - 1.$$

The material feature of this filter may be determined by

$$\varphi_{2m} = \Phi(f) = \mathrm{Var}\!\left(\frac{f}{\mathcal{F}(f)} - 1\right),$$

wherein Var denotes the variance function. For example, the image filter may be a grey-level-occurrence-based contrast filter. This material filter may be based on the grey level occurrence matrix $M_{f,\rho} = [p_{g_1 g_2}]$, whereas $p_{g_1 g_2}$ is the occurrence rate of the grey combination $(g_1, g_2) = [f(x_1,y_1), f(x_2,y_2)]$, and the relation ρ defines the distance between $(x_1,y_1)$ and $(x_2,y_2)$, which is ρ(x,y) = (x+a, y+b) with a and b selected from 0, 1. The material feature of the grey-level-occurrence-based contrast filter may be given by

$$\varphi_{2m} = \Phi(f) = \sum_{i,j=0}^{N-1} p_{ij}\,(i-j)^2.$$

For example, the image filter may be a grey-level-occurrence-based energy filter. This material filter is based on the grey level occurrence matrix defined above. The material feature of the grey-level-occurrence-based energy filter may be given by

$$\varphi_{2m} = \Phi(f) = \sum_{i,j=0}^{N-1} (p_{ij})^2.$$

For example, the image filter may be a grey-level-occurrence-based homogeneity filter. This material filter is based on the grey level occurrence matrix defined above. The material feature of the grey-level-occurrence-based homogeneity filter may be given by

$$\varphi_{2m} = \Phi(f) = \sum_{i,j=0}^{N-1} \frac{p_{ij}}{1 + |i-j|}.$$

For example, the image filter may be a grey-level-occurrence-based dissimilarity filter. This material filter is based on the grey level occurrence matrix defined above. The material feature of the grey-level-occurrence-based dissimilarity filter may be given by

$$\varphi_{2m} = \Phi(f) = -\sum_{i,j=0}^{N-1} p_{ij}\,\log(p_{ij}).$$

For example, the image filter may be a Law's energy filter. This material filter may be based on the Laws vectors L5 = [1, 4, 6, 4, 1] and E5 = [−1, −2, 0, 2, 1] and the matrices L5(E5)T and E5(L5)T. The image fk is convoluted with these matrices:

$$f_{k,L5E5}^{*}(x,y) = \sum_{i=-2}^{2}\sum_{j=-2}^{2} f_k(x+i, y+j)\,L_5(E_5)^T \quad \text{and} \quad f_{k,E5L5}^{*}(x,y) = \sum_{i=-2}^{2}\sum_{j=-2}^{2} f_k(x+i, y+j)\,E_5(L_5)^T,$$
$$E = \int \frac{f_{k,L5E5}^{*}(x,y)}{\max\left(f_{k,L5E5}^{*}(x,y)\right)}\,dx\,dy, \qquad F = \int \frac{f_{k,E5L5}^{*}(x,y)}{\max\left(f_{k,E5L5}^{*}(x,y)\right)}\,dx\,dy,$$

whereas the material feature of the Law's energy filter may be determined by φ2m = Φ(f) = E/F. For example, the material dependent image filter may be a threshold area filter. This material feature may relate two areas in the image plane. A first area Ω1 may be an area wherein the function f is larger than α times the maximum of f. A second area Ω2 may be an area wherein the function f is smaller than α times the maximum of f, but larger than a threshold value ε times the maximum of f. Preferably, α may be 0.5 and ε may be 0.05. Due to speckles or noise, the areas may not simply correspond to an inner and an outer circle around the spot center. As an example, Ω1 may comprise speckles or unconnected areas in the outer circle. The material feature may be determined by

$$\varphi_{2m} = \Phi(f) = \frac{\int_{\Omega_1} 1}{\int_{\Omega_2} 1}, \quad \text{wherein} \quad \Omega_1 = \{x \mid f(x) > \alpha \cdot \max(f(x))\} \quad \text{and} \quad \Omega_2 = \{x \mid \varepsilon \cdot \max(f(x)) < f(x) < \alpha \cdot \max(f(x))\}.$$

The evaluation device 346 may be configured for using at least one predetermined relationship between the material feature φ2m and the material property of the surface of the object 312 having generated the reflection feature for determining the material property of the surface of the object 312 having generated the reflection feature. The predetermined relationship may be one or more of an empirical relationship, a semi-empirical relationship, and an analytically derived relationship.
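A sketch of the threshold area filter, following the definition above with the preferred values α = 0.5 and ε = 0.05 (degenerate spots with an empty Ω2 are not handled):

    import numpy as np

    def threshold_area_feature(f: np.ndarray, alpha=0.5, eps=0.05) -> float:
        fmax = float(f.max())
        omega1 = f > alpha * fmax                         # bright core (plus speckles)
        omega2 = (f > eps * fmax) & (f < alpha * fmax)    # surrounding band
        return float(omega1.sum()) / float(omega2.sum())  # ratio of the two areas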
The evaluation device 346 may comprise at least one data storage device for storing the predetermined relationship, such as a lookup list or a lookup table. The evaluation device 346 is configured for identifying a reflection feature as being generated by illuminating biological tissue in case its corresponding material property fulfills the at least one predetermined or predefined criterion. The reflection feature may be identified as being generated by biological tissue in case the material property indicates “biological tissue”. The reflection feature may be identified as being generated by biological tissue in case the material property is below or equal to at least one threshold or range, wherein in case the determined deviation is below and/or equal to the threshold the reflection feature is identified as being generated by biological tissue and/or the detection of biological tissue is confirmed. At least one threshold value and/or range may be stored in a table or a lookup table and may be determined, e.g., empirically and may, as an example, be stored in at least one data storage device. The evaluation device 346 is configured for identifying the reflection feature as background otherwise. Thus, the evaluation device 346 may be configured for assigning each projected spot a depth information and a material property, e.g. skin yes or no. The material property may optionally be determined by evaluating φ2m subsequently after determining the longitudinal coordinate z, such that the information about the longitudinal coordinate z can be considered for evaluating φ2m. The evaluation device 346 may be configured for determining the longitudinal coordinate of the surface point or region having reflected the illumination feature. The evaluation device 346 may be configured for determining the beam profile information for each of the reflection features by using the depth-from-photon-ratio technique. With respect to the depth-from-photon-ratio (DPR) technique, reference is made to the description above and to WO 2018/091649 A1, WO 2018/091638 A1, WO 2018/091640 A1, and WO 2021/214123 A1, the full content of each of which is incorporated herein by reference. Each component of the system 300 (e.g., the detector 310, the projector 311, the control unit 347, and/or the evaluation device 346) may fully or partially be integrated into the at least one housing 305. The housing 305 may include an opening, preferably located concentrically with regard to an optical axis of the detector 310, which defines a direction of view of the detector 310. The components of the evaluation device 346 and/or the control unit 347 may fully or partially be integrated into a distinct device and/or may fully or partially be integrated into other components of the system 300 (e.g., the detector 310 and/or the projector 311). Besides the possibility of fully or partially combining two or more components, the optical sensor 330 and/or the projector 311 and one or more of the components of the evaluation device 346 and/or control unit 347 may be interconnected by one or more connectors 354 and/or by one or more interfaces, as symbolically depicted in FIG. 49. Further, instead of using the at least one optional connector 354, the evaluation device 346 and/or the control unit 347 may fully or partially be integrated into the at least one housing 305 of the detector system 300. Additionally or alternatively, the evaluation device 346 and/or the control unit 347 may fully or partially be designed as a separate device.
The computer systems and computer-implemented methods discussed herein may include additional, fewer, or alternate actions and/or functionalities, including those discussed elsewhere herein. The computer systems may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media. The methods may be implemented via one or more local or remote processors, transceivers, servers, and/or sensors (such as processors, transceivers, servers, and/or sensors mounted on mobile computing devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium. As will be appreciated based upon the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware, or any combination or subset thereof. Any such resulting program, having computer-readable code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure. The computer-readable media may be, for example, but are not limited to, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), and/or any transmitting/receiving medium such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network. These computer programs (also known as programs, software, software applications, “apps”, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The “machine-readable medium” and “computer-readable medium,” however, do not include transitory signals. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. As used herein, the term “database” may refer to either a body of data, a relational database management system (RDBMS), or to both. A database may include any collection of data including hierarchical databases, relational databases, flat file databases, object-relational databases, object-oriented databases, and any other structured collection of records or data that is stored in a computing system. The above examples are examples only, and thus are not intended to limit in any way the definition and/or meaning of the term database. Examples of RDBMS's include, but are not limited to, Oracle® Database, MySQL, IBM® DB2, Microsoft® SQL Server, Sybase®, and PostgreSQL. However, any database may be used that enables the systems and methods described herein.
(Oracle is a registered trademark of Oracle Corporation, Redwood Shores, Calif.; IBM is a registered trademark of International Business Machines Corporation, Armonk, N.Y.; Microsoft is a registered trademark of Microsoft Corporation, Redmond, Wash.; and Sybase is a registered trademark of Sybase, Dublin, Calif.). As used herein, a processor may include any programmable system including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are examples only, and are thus not intended to limit in any way the definition and/or meaning of the term “processor.” As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are examples only, and are thus not limiting as to the types of memory usable for storage of a computer program. In one embodiment, a computer program is provided, and the program is embodied on a computer-readable medium. In an exemplary embodiment, the system is executed on a single computer system, without requiring a connection to a server computer. In a further embodiment, the system is run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Wash.). In yet another embodiment, the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom). The application is flexible and designed to run in various different environments without compromising any major functionality. In some embodiments, the system includes multiple components distributed among a plurality of computing devices. One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium. The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independently and separately from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes.
LIST OF REFERENCE NUMBERS
110 detector; 112 object; 113 optical sensors; 114 beacon device; 115 sensor element; 116 light beam; 117 matrix; 118 first optical sensor; 119 mask; 120 second optical sensor; 121 light-sensitive area; 122 first light-sensitive area; 124 second light-sensitive area; 126 optical axis of the detector; 128 transfer device; 129 optical axis of the transfer device; 130 focal point; 131 light spot; 132 evaluation device; 133 center detector; 134 detector system; 135 summing device; 136 illumination source; 137 combining device; 138 illumination light beam; 140 reflective element; 142 divider; 144 position evaluation device; 146 camera; 148 human-machine interface; 150 entertainment device; 152 tracking system; 154 scanning system; 156 connector; 158 housing; 160 control device; 162 user; 164 opening; 166 direction of view; 168 coordinate system; 170 machine; 172 track controller; 174 array; 176 optical sensor; 178 quadrant photodiode; 180 geometrical center of every; 182 geometrical center of first optical sensor; 184 geometrical center of second optical sensor; 186 light spot; 188 actuator; 190 diaphragm; 192 readout device for optical storage media; 194 optical element; 196 region of interest; 198 first area; 200 second area; 202 inner region; 204 plane; 206 outer region; 208 direction of movement; 210 direction of movement; 212 curve; 214 curve; 216 set of curves; 218 set of curves;
300 system; 305 Housing; 310 Detector; 312 Object; 314 Camera; 316 Light beam; 316a Pre-diffracted light beam; 316b Diffracted light beam; 318 Dot; 320 Light beam; 322 Reflection beam; 324 Reflection beam; 326 Hood; 328 First illumination source; 330 Optical sensor; 332 Light-sensitive area; 334 Sensor element; 338 Second illumination source; 340 DOE; 344 Transfer device; 346 Evaluation device; 347 Control unit; 348 Processor; 350 Memory; 352 Database; 354 Connector; 360 Illumination pattern; 362 Epipolar line; 400a DOE; 400b DOE; 400c DOE; 402 Lens or refractive-diffractive element; 402a Lens; 402b Lens; 402c Lens; 403 Diffractive plate; 404 Cavity; 406 First end; 408 Second end; 410 Diverting element;
1110 detector; 1112 object; 1114 beacon device; 1116 light beam; 1118 first optical sensor; 1120 second optical sensor; 1122 first light-sensitive area; 1124 second light-sensitive area; 1126 optical axis; 1128 transfer device; 1130 focal point; 1132 evaluation device; 1134 detector system; 1136 illumination source; 1138 illumination light beam; 1140 reflective element; 1142 divider; 1144 position evaluation device; 1146 camera; 1148 human-machine interface; 1150 entertainment device; 1152 tracking system; 1154 scanning system; 1156 connector; 1158 housing; 1160 control device; 1162 user; 1164 opening; 1166 direction of view; 1168 coordinate system; 1170 machine; 1172 track controller; 1174 fluorescent waveguiding sheet; 1176 waveguiding; 1178 matrix material; 1180 fluorescent material; 1182 photosensitive element; 1184 photosensitive element; 1186 photosensitive element; 1188 photosensitive element; 1190 edge; 1192 edge; 1194 edge; 1196 edge; 1198 optical filter element; 1200 reference photosensitive element; 1202 small light spot; 1204 large light spot; 1206 shadow; 1208 summing device; 1210 subtracting device; 1212 photosensitive element; 1214 corner; 1216 optical coupling element;
2110 detector; 2112 object; 2113 optical sensors; 2114 beacon device; 2115 Illumination source; 2116 light beam; 2118 first optical sensor; 2120 second optical sensor; 2121 light-sensitive area; 2122 first light-sensitive area; 2124 second light-sensitive area; 2126 optical axis of the detector; 2128 transfer device; 2129 optical axis of the transfer device; 2130 angle dependent optical element; 2131 light beam; 2132 first side; 2133 evaluation device; 2134 divider; 2136 position evaluation device; 2138 Optical fiber; 2140 Illumination fiber; 2142 Light beam; 2144 First fiber; 2146 Second fiber; 2148 entrance end; 2150 exit end; 2152 first light beam; 2154 Second light beam; 2156 camera; 2158 Detector system; 2160 Human-machine interface; 2162 Entertainment device; 2164 Tracking system; 2166 Scanning system; 2168 connector; 2170 housing; 2172 Control device; 2174 user; 2176 opening; 2178 Direction of view; 2180 Coordinate system; 2182 machine; 2184 Track controller; 2186 Line pattern; 2188 curve; 2190 curve; 2192 curve; 2194 curve; 2196 curve; 2198 curve; 2200 curve; 2202 curve; 2204 curve; 2206 Epipolar line
276,180
11860293
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
The drawings are to be regarded as being schematic representations and elements illustrated in the drawings are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software, or a combination thereof. Hereinafter, techniques of coexistence of data communication and radar probing on a radio channel are described. To facilitate the coexistence, pilot signals of a radio access technology employed for the data communication are re-used as radar probe pulses for the radar probing. The pilot signals—sometimes also referred to as reference signals or sounding signals—may have well-defined spatial, temporal, and frequency transmit characteristics. Generally, the pilot signals may have well-defined transmit properties, such as waveform, amplitude, phase, etc. Conventionally, the pilot signals are employed for performing at least one of channel sensing and link adaptation. This typically helps to maintain or optimize the data communication. Additionally, such properties of the pilot signals as outlined above facilitate application of the pilot signals as radar probe pulses when participating in the radar probing. To implement such well-defined characteristics of the pilot signals, in some examples, one or more resource mappings may be employed to coordinate resource usage of the data communication and the radar probing. The one or more resource mappings may define resource elements with respect to one or more of the following: frequency dimension; time dimension; spatial dimension; and code dimension. Sometimes, the resource elements are also referred to as resource blocks. Resource elements may thus have a well-defined duration in time domain and/or bandwidth in frequency domain. The resource elements may be, alternatively or additionally, defined with respect to a certain coding and/or modulation scheme. A given resource mapping may be defined with respect to a certain spatial application area or cell. Some of the resource elements may comprise one or more pilot signals. Other resource elements may relate to transmission blocks for the data. In some examples, different types of pilot signals may exist. E.g., there may be UL pilot signals and/or DL pilot signals. Some types of pilot signals may be used to tailor resource allocation while other types of pilot signals may be used to determine beamforming antenna weights. In some examples, all different types of pilot signals are re-used as radar probe pulses. In other examples, only some of the types of pilot signals are re-used as radar probe pulses. Generally, it is not required that all available pilot signals are re-used as radar probe pulses. By re-using the pilot signals for the radar probing, the radar probing can be implemented with no or little overhead. Data throughput of the data communication is not significantly reduced.
At the same time, interference between the radar probing and the data communication can be effectively mitigated, because the pilot signals can preserve their function of enabling at least one of channel sensing and link adaptation of the radio channel—while offering extended functionality in the form of the radar probing. Transmission blocks including data are typically not suffering from strong interference, because they can be orthogonal to the resource elements comprising the pilot signals. By employing the radar probing in the context of a device configured for data communication, functionality of that device can be greatly enhanced. Examples include: positioning aid, traffic detection, drone landing assistance, obstacle detection, security detection, photography features, etc. Now referring to FIG. 1, an example scenario of coexistence between radar probing 109 and data communication 108—such as packetized data communication—is depicted. Here, the base station (BS) 112 of a cellular network (in FIG. 1, the cells of the cellular network are not illustrated) implements the data communication 108 with the terminal (UE) 130 attached to the cellular network via a radio channel 101. Communicating data may comprise transmitting data and/or receiving data. In the example of FIG. 1, the data communication 108 is illustrated as bidirectional, i.e. comprising uplink (UL) communication and downlink (DL) communication. E.g., the terminal 130 may be selected from the group comprising: handheld device; smartphone; laptop; drone; tablet computer; etc. The data communication 108 may be defined with respect to a radio access technology (RAT). The RAT may comprise a transmission protocol stack in layer structure. E.g., the transmission protocol stack may comprise a physical layer (Layer 1), a data link layer (Layer 2), etc. Here, a set of rules may be defined with respect to the various layers, which rules facilitate the data communication. E.g., the Layer 1 may define transmission blocks for the data communication 108 and pilot signals. While with respect to FIG. 1 and the following FIGS., various examples are provided with respect to a cellular network where handovers are supported between a plurality of cells, in other examples, respective techniques may be readily applied to non-cellular point-to-point networks. Examples of cellular networks include the Third Generation Partnership Project (3GPP)-defined networks such as 3G, 4G, and upcoming 5G. Examples of point-to-point networks include Institute of Electrical and Electronics Engineers (IEEE)-defined networks such as the 802.11x Wi-Fi protocol or the Bluetooth protocol. As can be seen, various RATs can be employed according to various examples. The data communication 108 is supported by both the BS 112 and the terminal 130. The data communication 108 employs a shared channel 105 implemented on the radio channel 101. The shared channel 105 comprises a UL shared channel and a DL shared channel. The data communication 108 may be used in order to perform uplink and/or downlink communication of application-layer user data between the BS 112 and the terminal 130. As illustrated in FIG. 1, furthermore, a control channel 106 is implemented on the radio channel 101. Also, the control channel 106 is bidirectional and comprises a UL control channel and a DL control channel. The control channel 106 can be employed to implement communication of control messages. E.g., the control messages can allow transmission properties of the radio channel 101 to be set up.
Both the performance of the shared channel 105 and the performance of the control channel 106 are monitored based on pilot signals. The pilot signals, sometimes also referred to as reference signals or sounding signals, can be used in order to determine the transmission characteristics of the radio channel 101. In detail, the pilot signals can be employed in order to perform at least one of channel sensing and link adaptation. Channel sensing can enable determining the transmission characteristics such as likelihood of data loss, bit error rate, multipath errors, etc. of the radio channel 101. Link adaptation can comprise setting transmission properties of the radio channel 101 such as modulation scheme, bit loading, coding scheme, etc. The radar probing 109 can be used in order to determine the position and/or velocity of passive objects in the vicinity of the BS 112 and/or the terminal 130. The radar probing 109 may involve the analysis of an echo from a transmitted radar probe pulse. Here, radial and/or tangential velocity may be determined. For this, one or more receive properties of echoes of the radar probe pulses can be employed as part of the radar probing. Echoes are typically not transmitted along a straight line, hereinafter referred to as line-of-sight (LOS) for the sake of brevity, but are affected by reflection at the surface of an object. The receive properties may be locally processed at the radar receiver; and/or may be provided to a further entity such as the radar transmitter for processing to yield the position and/or the velocity. As illustrated in FIG. 1, the radar probing 109 is also supported by both the BS 112 and the terminal 130. Thus, data communication 108 and radar probing 109 coexist in the hardware of the BS 112 and the terminal 130. In the example of FIG. 1, the terminal 130 is connected to the base station 112 and is associated with a given cell of the cellular network. Typically, the pilot signals communicated between the base station 112 and the terminal 130 for channel sensing and/or link adaptation of the channel 101 of the respective cell are orthogonal to further pilot signals of a further cell and/or include a respective cell identifier unique for the respective cell. It is possible that the radar probing 109 relies only on pilot signals associated with the respective cell to which the terminal 130 is connected. Alternatively or additionally, it is also possible that further pilot signals of neighboring or adjacent cells are taken into consideration as part of the radar probing. Here, it is possible that the BS 112 implements the radar transmitter and/or the radar receiver. Likewise, it is possible that the terminal 130 implements the radar transmitter and/or the radar receiver. The radar transmitter is configured to transmit radar probe pulses. Likewise, the radar receiver is configured to receive echoes of radar probe pulses reflected from passive objects. In this regard, the pilot signals employed as radar probe pulses may comprise UL pilot signals and/or DL pilot signals. In a first example, radar probe pulses are transmitted by the BS 112 and corresponding echoes are received by the BS 112. In a second example, radar probe pulses are transmitted by the BS 112 and corresponding echoes are received by the terminal 130. In a third example, radar probe pulses are transmitted by the terminal 130 and corresponding echoes are received by the terminal 130. In a fourth example, radar probe pulses are transmitted by the terminal 130 and corresponding echoes are received by the BS 112.
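For a monostatic setup, the range estimate underlying such radar probing can be sketched in a few lines (illustrative only; names are assumptions):

    C = 299_792_458.0  # speed of light in m/s

    def echo_range(round_trip_delay_s: float) -> float:
        # The probe pulse travels transmitter -> object -> receiver, so the
        # one-way distance is half the round-trip path.
        return C * round_trip_delay_s / 2.0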
While with respect to FIG. 1 a two-device scenario is illustrated, in further examples, it is possible that more than two devices participate in the radar probing 109 as radar transmitters and/or radar receivers, respectively. E.g., further terminals connected to the cellular network (not shown in FIG. 1) may participate in the radar probing 109. Generally, the techniques described herein may be implemented on the various devices of the network such as the BS 112 or one or more terminals 130 of the network. FIG. 2 illustrates aspects with respect to the resource mapping 155. As illustrated in FIG. 2, the resource mapping 155 is defined in frequency domain (vertical axis in FIG. 2) and time domain (horizontal axis in FIG. 2). The rectangular blocks in FIG. 2 illustrate different resource elements. The resource elements 160 correspond to transmission blocks for the data communication 108. Differently, the resource elements 161-163—which are orthogonal to the resource elements 160—include pilot signals used as radar probe pulses for radar probing 109. The different resource elements 161-163 may correspond to different types of pilot signals. It is also possible that the different resource elements 161-163 correspond to pilot signals associated with different cells. The resource elements 161-163 reserved for the pilot signals may be arranged in an intermittent sequence having a certain periodicity 151 (in FIG. 2, for the sake of simplicity, only a single repetition of the sequence of resource elements 161-163 is fully depicted). It is also possible that pilot signals are continuously transmitted. A toy model of such a mapping is sketched below. In some examples, the resource mapping 155 may depend on the particular cell identification implemented by a corresponding BS 112. I.e., in order to mitigate inter-cell interference, it is possible that neighboring cells—or virtual cells—implement different resource mappings 155. Then, pilot signals in a first cell may be transmitted in resource elements 161-163 which are orthogonal with respect to the resource elements of a second cell neighboring the first cell. Generally, the techniques described herein are not limited to a particular spectrum or band. E.g., the spectrum occupied by the resource mapping 155 may be a licensed band or an unlicensed band. Typically, in an unlicensed band, un-registered devices can gain access. Sometimes, in a licensed band, a repository may keep track of all eligible subscribers; differently, in an unlicensed band such a database of eligible subscribers may not exist. Different operators may access the unlicensed band. E.g., the spectrum occupied by the resource mapping 155 may be at least partially above 6 GHz, preferably at least partially above 15 GHz, more preferably at least partially above 30 GHz. Typically, with increasing frequencies, the aperture of an antenna decreases. Here, due to the well-defined directional transmission characteristics of the electromagnetic waves employed for the radar probing 109, a high spatial resolution may be achieved when determining the position of passive objects as part of the radar probing 109. FIG. 3 illustrates aspects with respect to a radar probe pulse 171 transmitted and/or received during one of the resource elements 161-163. The radar probe pulse 171 is implemented by a pilot signal. The radar probe pulse 171 comprises a probing pulse section 165. Optionally, the radar probe pulse 171 may comprise a data section 166 encoding data that can help to implement the radar probing 109.
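The toy model referenced above: a time/frequency grid in which most resource elements carry data transmission blocks and an intermittent, periodic subset is reserved for pilot signals re-used as radar probe pulses. All dimensions are invented for illustration and do not correspond to any particular RAT:

    import numpy as np

    N_SUBCARRIERS, N_SLOTS, PERIOD = 12, 40, 10            # illustrative grid sizes

    grid = np.zeros((N_SUBCARRIERS, N_SLOTS), dtype=int)   # 0: data transmission block
    for slot in range(0, N_SLOTS, PERIOD):                 # periodicity of the pilots
        grid[::4, slot] = 1                                # 1: pilot / radar probe pulse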
E.g., the probing pulse section165may comprise a waveform having spectral contributions arranged within the frequency associated with the respective resource element161-163. An amplitude of the waveform may be modulated; this is sometimes referred to as an envelope. The envelope may have a rectangular shape, a sinc-function shape, or any other functional dependency depending on the implementation. The duration of the probing pulse section165is sometimes referred to as pulse width. The pulse width may be shorter than the duration of the respective resource element161-163to enable reception of an echo of the radar probe pulse171during the duration of the respective resource element161-163, taking into account time of travel. In some examples, one or more symbols included in the probing pulse section165may be generated based on a generator code. Here, depending on the particular resource element161-163employed for the respective pilot signal/radar probe pulse171, the probing pulse section165may differ. Generally, different types of pilot signals/radar probe pulses171may employ different probing pulse sections165. The waveform of the probing pulse section165may have well-defined transmit properties. This enables channel sensing and/or link adaptation to be performed based on the receive properties of the probing pulse section165. The optional data section166may include additional information which is suited to facilitate the radar probing109. Such information may comprise: information on the radar transmitter, such as an identity; position; cell identity; virtual cell identity; etc.; and/or information on the radar probe pulse171itself such as a time of transmission; directional transmission profile; etc. Such information may be, generally, included explicitly or implicitly. E.g., for implicit inclusion of respective information, a lookup scheme communicated via the control channel106implemented on the radio channel101may be employed to enable inclusion of compressed flags. While in the example ofFIG.3such information is included in the data section166of the radar probe pulse171itself, in other examples it is also possible that such information is communicated separately from the radar probe pulse171, e.g., in a control message communicated on the control channel106in one of the transmission blocks160. Here, cross-reference between the control message and the radar probe pulse171may be achieved by, e.g., a unique temporal arrangement of the radar probe pulse171and the control message or inclusion of a characteristic identifier in the control message and the radar probe pulse171. In some examples, additional information which is shown in the example ofFIG.3to be included in the data section166may be pre-negotiated. E.g., depending on the particular resource element161-163employed for transmission of the respective pilot signal/radar probe pulse171, the respective parameters may be known to the radar receiver and/or radar transmitter based on negotiated rules. Here, it may not be required to separately transmit this information. In some examples, the different ones of the radar probe pulses171may be orthogonal with respect to each other. Here, orthogonality of the radar probe pulses171may be achieved by employing resource elements161-163for their transmission which differ from each other with respect to one or more of the following: frequency dimension; time dimension; spatial dimension; and code dimension.
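A minimal sketch of one such scheme follows: deriving mutually orthogonal (disjoint) time-frequency resource elements for the pilot signals of different cells from the cell identifier. The numerology, the offsets, and the function name are hypothetical examples, not a specification.

```python
# Minimal sketch: deriving orthogonal resource elements 161-163 for the
# pilot signals of different cells from a cell identifier, so that pilots
# of a first cell do not collide with pilots of a neighboring second cell.
# The grid size and periodicity 151 are hypothetical example values.

N_SUBCARRIERS = 12   # frequency positions per period
PERIODICITY = 24     # resource mapping repeats every 24 symbols

def pilot_resource_elements(cell_id: int, n_pilots: int = 3):
    """Return (symbol, subcarrier) pairs reserved for pilots of a cell.
    Cells whose identifiers differ modulo N_SUBCARRIERS obtain disjoint
    (orthogonal) sets of resource elements."""
    offset = cell_id % N_SUBCARRIERS
    return [((3 * k) % PERIODICITY, (offset + 4 * k) % N_SUBCARRIERS)
            for k in range(n_pilots)]

cell_a = pilot_resource_elements(cell_id=110)
cell_b = pilot_resource_elements(cell_id=111)
assert not set(cell_a) & set(cell_b)   # no collision between the two cells
```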
These cases are sometimes referred to as frequency division duplexing (FDD), time division duplexing (TDD), spatial division duplexing, and code division duplexing (CDD). By employing orthogonal resource elements for different radar probe pulses171, interference between separate instances of the radar probing109may be mitigated. With reference to bothFIGS.2and3, in some examples, multiple resource elements161-163reserved for pilot signals are aligned such that they are adjacent in time domain and/or frequency domain. This makes it possible to achieve a wider bandwidth for the radar probing109and, thereby, a better accuracy. FIG.4schematically illustrates an example of the radar probing109. Here, the BS112is the radar transmitter. The BS112thus transmits radar probe pulses. The BS112implements a cell110of the cellular network. The cell110extends around the BS112. The radar probe pulses171, in the example ofFIG.4, have isotropic directional transmission profiles180, i.e., have substantially the same amplitude for various orientations of transmission with respect to the BS112(schematically illustrated by the dashed circle inFIG.4). Thus, an amplitude or phase of the radar probe pulses does not show a significant dependency on the transmission direction. The radar probe pulses171can travel along a LOS direction from the BS112to the terminal130(dotted arrow inFIG.4). The radar probe pulses171are also reflected by a passive object140, e.g., an obstacle, a car, a plant, a house, a person, etc. The passive object140is not required to have communication capability. Thus, the passive object140may not be configured to communicate on the radio channel101,105,106. Due to the reflection at the passive object140, echoes172of the radar probe pulses171are created. These echoes172may be received by the terminal130and/or the BS112, as indicated inFIG.4by the respective arrows. In some examples, a direction of the echoes172and/or a phase shift of the echoes172may be characteristic of the position or shape of the object140. A Doppler shift of the echoes172may be characteristic of the velocity of the object140. FIG.5Ais a signaling diagram of communication between the BS112and the terminal130. The communication illustrated in the example ofFIG.5Afacilitates the radar probing109. First, at1001, the radio channel101is established between the BS112and the terminal130. Here, an attachment procedure can be executed. Subsequently, the terminal130may be operated in connected mode. Typically, during the attachment procedure, the particular resource mapping155to be used (including the position of the resource elements161-163used for transmission of the pilot signals, as well as the position of the transmission blocks160) is negotiated between the BS112and the terminal130. E.g., this can be implemented by transmitting, to the terminal130, the cell identifier of the cell to which the terminal130is connected. The cell identifier can be uniquely associated with a given resource mapping155to be used. Then, at1002, transmission of the radar probe pulse171is effected. In the example ofFIG.5A, the BS112transmits the radar probe pulse171. The radar probe pulse171is implemented by a pilot signal. In the example ofFIG.5A, an echo172of the radar probe pulse171is received by the terminal130. In the example ofFIG.5A, the terminal130evaluates the reception of the radar probe pulse171to some degree.
In detail, the terminal130analyzes the raw receive data and determines certain receive properties of the echo172, e.g.: angle of arrival; time-of-flight; Doppler shift; and/or receive power level. Thus, the terminal130is configured to determine the one or more receive properties based on the received echoes172. The terminal130then sends a report message1003to the BS112. The report message is indicative of the determined one or more receive properties of the echo172. Optionally, the report message1003is indicative of a relative or absolute position of the terminal130. Based on the one or more receive properties (and optionally further based on the position of the terminal130as obtained from the report message1003, if not otherwise known to the BS112), the BS112may then use this information to determine the position and/or velocity of the passive object associated with the echo172. In detail, where the absolute or relative position of the terminal130(e.g., with respect to the BS112) is known, it is possible to infer the position of the passive object, e.g., by means of triangulation, etc. Similar considerations apply with respect to the direction of movement of the passive object140. Also illustrated inFIG.5Ais a scenario where the terminal130receives the radar probe pulse171in a LOS transmission,1010. I.e., the terminal130does not (necessarily) receive an echo of the radar probe pulse171at1010, but receives the non-reflected radar probe pulse171. Because the radar probe pulse171is implemented by a pilot signal, it is possible to perform channel sensing and/or link adaptation based on the respective pilot signal. For this, the terminal130sends a measurement report1011indicative of at least one receive property of the pilot signal back to the BS112. The BS112can then perform channel sensing and/or link adaptation based on the indicated at least one receive property. Channel sensing and/or link adaptation can also be performed based on UL pilot signals/radar probe pulses171. Further, channel sensing and/or link adaptation can also be performed based on at least one receive property of an echo172of a pilot signal. E.g., in one example, it is possible that the echo172of the radar probe pulse171received at1002corresponds to a pilot signal associated with a neighboring cell of the cell110to which the terminal130is connected. This may be the reason why channel sensing and/or link adaptation is not implemented based on the receive properties of the transmission at1002. However, the pilot signal implementing the radar probe pulse171received at1010may be associated with the cell110to which the terminal130is connected. Because of this, channel sensing and/or link adaptation can be implemented based on the receive properties of the transmission at1010. Whether or not a pilot signal is associated with the respective cell110may be derived from a cell identifier included in the respective pilot signal and/or based on knowledge of the respective resource mappings155. In particular, pilot signals associated with different cells110may be orthogonal with respect to each other, e.g., may be transmitted in resource elements161-163which are orthogonal in time domain, frequency domain, code domain, etc. Therefrom, it is possible to infer the particular cell110to which a given pilot signal belongs. Such examples as described above with respect to the physical cell110may also be implemented for virtual cells. FIG.5Bis a signaling diagram of communication between the BS112and the terminal130.
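The triangulation mentioned above can be made concrete with a small geometric sketch: assuming the BS112knows the position of the terminal130, the object position follows in closed form from the bistatic time-of-flight and the angle of arrival reported in the report message1003. The 2-D geometry, names, and values below are hypothetical illustrations, not the signaling of FIG.5A itself.

```python
# Minimal sketch: inferring the 2-D position of the passive object 140
# from the contents of the report message 1003. Assumes the BS 112
# transmitted the radar probe pulse, the terminal 130 received the echo
# 172, and the BS knows the terminal position. Geometry only; all
# names and values are hypothetical.
import math

def locate_object(bs, terminal, bistatic_tof_s, aoa_rad):
    """bistatic_tof_s: time of flight BS -> object -> terminal.
    aoa_rad: bearing from the terminal back toward the object, i.e.,
    the direction from which the echo 172 arrives at the terminal.
    The object lies on the ellipse |B-P| + |P-T| = c * tof with foci at
    the BS (B) and terminal (T); intersecting that ellipse with the
    arrival ray from the terminal gives a closed-form distance t."""
    c = 299_792_458.0
    total_range = c * bistatic_tof_s
    ux, uy = math.cos(aoa_rad), math.sin(aoa_rad)   # terminal -> object
    dx, dy = terminal[0] - bs[0], terminal[1] - bs[1]
    d2 = dx * dx + dy * dy
    t = (total_range ** 2 - d2) / (2.0 * (total_range + dx * ux + dy * uy))
    return terminal[0] + t * ux, terminal[1] + t * uy

bs = (0.0, 0.0)
terminal = (100.0, 0.0)
# Object at (60, 40): path BS -> object -> terminal has a known length.
tof = (math.hypot(60, 40) + math.hypot(40, 40)) / 299_792_458.0
aoa = math.atan2(40 - 0, 60 - 100)            # direction terminal -> object
print(locate_object(bs, terminal, tof, aoa))  # ~(60.0, 40.0)
```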
The example ofFIG.5Bgenerally corresponds to the example ofFIG.5A. However, in the example ofFIG.5B, further processing as part of the radar probing109is performed at the terminal130. In particular, the terminal130already evaluates one or more receive properties of the echo172to determine the relative or absolute position and/or velocity of the object140. This position and/or velocity is included in the report message1004. Then, the BS112may receive the report message1004. The BS112, in some examples, may fuse respective information, e.g., on the position and the velocity, received from a plurality of terminals. Here, also the position and/or the velocity as determined from an echo172received by the BS112itself may be taken into consideration. This may increase an accuracy of the radar probing109. In the various examples, the amount of logic residing at the terminal130(and, generally, the radar receiver) may vary. In one example, raw information on the received echo172is reported to the radar transmitter, e.g., the BS112. In other examples, some processing of the raw information is performed, e.g., as in the example ofFIG.5A, to determine one or more receive properties and/or to compress the raw information. In other examples, it is even possible to determine the position of the object140from which the echo172originates. Then, this position can be reported to the radar transmitter, e.g., the BS112. While above various examples have been described with respect to radar probe pulses171having an isotropic directional transmission profile180, it is also possible that the radar probe pulses171have anisotropic directional transmission profiles. FIG.6schematically illustrates an example of radar probing109where the employed radar probe pulses171have anisotropic directional transmission profiles181-183. The anisotropic directional transmission profiles181-183are associated with a dependency of the amplitude of the respective radar probe pulses171on the orientation relative to the radar transmitter, in the example ofFIG.6with respect to the BS112. In the example ofFIG.6, the anisotropic directional transmission profiles181-183are implemented by corresponding pencil beams, but generally other shapes are conceivable. The anisotropic directional transmission profiles181-183may be employed based on techniques of beamforming. For beamforming, amplitude and phase of antennas of an antenna array are varied according to certain antenna weights. The antenna weights are typically determined based on techniques of channel sensing, i.e., depending on receive properties of pilot signals. Thereby, constructive and destructive interference may be achieved for different directions with respect to the transmitter. This results in the anisotropic directional transmission profiles181-183. As illustrated inFIG.6, a plurality of different anisotropic directional transmission profiles182is implemented for different radar probe pulses171. In particular, the different anisotropic directional transmission profiles181-183are associated with different radar probe pulses171. Thus, different ones of the radar probe pulses171may have different anisotropic directional transmission profiles. Thereby, it is possible to obtain a high spatial resolution for the radar probing109.
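The beamforming principle just described can be sketched as follows: conjugate-phase antenna weights steer a pencil-beam-like profile toward a chosen direction, with constructive interference at the steering angle and destructive interference elsewhere. The uniform linear array geometry and all values are hypothetical assumptions.

```python
# Minimal sketch: antenna weights producing an anisotropic directional
# transmission profile 181-183 with a uniform linear array. Conjugate
# phase steering yields constructive interference toward the steering
# angle and destructive interference elsewhere. Parameters hypothetical.
import numpy as np

def steering_weights(n_antennas: int, spacing_wavelengths: float,
                     steer_angle_rad: float) -> np.ndarray:
    """Per-antenna complex weights (amplitude and phase) for a beam
    toward steer_angle_rad, measured from the array broadside."""
    n = np.arange(n_antennas)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(steer_angle_rad)
    return np.exp(-1j * phase) / np.sqrt(n_antennas)

def array_gain(weights: np.ndarray, spacing_wavelengths: float,
               angle_rad: float) -> float:
    """Radiated amplitude of the weighted array toward angle_rad."""
    n = np.arange(len(weights))
    steering = np.exp(1j * 2 * np.pi * spacing_wavelengths * n
                      * np.sin(angle_rad))
    return abs(weights @ steering)

w = steering_weights(n_antennas=16, spacing_wavelengths=0.5,
                     steer_angle_rad=np.deg2rad(20))
print(array_gain(w, 0.5, np.deg2rad(20)))   # ~4.0: maximum toward the beam
print(array_gain(w, 0.5, np.deg2rad(-40)))  # small: outside the pencil beam
```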
While in the example ofFIG.6, only three anisotropic directional transmission profiles181-183are illustrated for sake of simplicity, in general, a plurality of anisotropic directional transmission profiles181-183may be employed, e.g., to cover the entire surrounding of the radar transmitter. In the example ofFIG.6, the anisotropic directional transmission profiles182are implemented as pencil beams. By implementing well-defined or narrow anisotropic directional transmission profiles181-183, e.g., in the form of pencil beams as illustrated inFIG.6, a high spatial resolution of the radar probing109can be achieved. This is apparent fromFIG.6where the radar probe pulse171of the profile182is reflected by the passive object140; the respective echoes172are received by both the BS112and the terminal130. On the other hand, the radar probe pulse171of the profile183is not reflected by the passive object140, because the passive object140is positioned outside the profile183. FIG.7schematically illustrates an example of radar probing109where the employed radar probe pulses171have anisotropic directional transmission profiles181-183.FIG.7generally corresponds to the example ofFIG.6. However, in the example ofFIG.7, the different anisotropic directional transmission profiles181-183are associated with different virtual cells111of the BS112(inFIG.7only the virtual cell111associated with the anisotropic directional transmission profile181is illustrated for sake of simplicity; the terminal130is connected to the virtual cell111associated with the anisotropic directional transmission profile183). The various virtual cells111may be associated with different cell identifiers and may, hence, employ different resource mappings155in some examples. Pilot signals communicated in the different virtual cells111may be orthogonal to each other. The virtual cells111may facilitate spatial orthogonality of the data communication108. In some examples, it is possible that the virtual cells111are associated with one or more BSs (not shown inFIG.7). The concept of virtual cells111may be associated with the comparably small aperture of high-frequency electromagnetic waves, e.g., above 6 or 30 GHz. To implement different virtual cells111, the BS112may have duplex capability. Here, full duplex (FD) or half duplex (HD) scenarios may be implemented. Respective considerations may also apply to the terminal130. As illustrated inFIG.7, the terminal130, connected to the virtual cell111associated with the anisotropic directional transmission profile183, also receives an echo172of the radar probe pulse171implemented by a pilot signal of the virtual cell111associated with the anisotropic directional transmission profile182, i.e., an echo of a pilot signal of a neighboring cell. By associating the different virtual cells111with the different radar probe pulses171, the concept of spatial diversity implemented by the BS112can be re-used to provide a high spatial resolution for the radar probing109. I.e., where different virtual cells111are associated with the anisotropic directional transmission profiles181-183anyway, the respective pilot signals can be efficiently re-used as radar probe pulses171. E.g., based on the at least one receive property of the echo172of the pilot signal/radar probe pulse171, it is possible to initiate a handover between neighboring virtual cells111.
In an example where the terminal130receives a strong echo172or signal along the direct path of the pilot signal associated with the virtual cell111defined by the anisotropic directional transmission profile182, this can be used to trigger the handover to that virtual cell111. In some examples, it is also possible to consider results from the radar probing109in triggering the handover between different cells110,111of the cellular network. E.g., it would be possible to consider the position and/or velocity of the object140in the handover. E.g., if significant obstruction of the LOS transmission path is expected to result from the object140changing its position with respect to the terminal130, this can be taken into account when triggering the handover. FIG.8schematically illustrates an example of the radar probing109where the employed radar probe pulses171have anisotropic directional transmission profiles181-183. Here, more than two devices (in the example ofFIG.8, the terminals130,131and the BS112) may participate in the radar probing109. In the present example, the BS112is the radar transmitter. It is possible that the BS112fuses information received from the terminals130,131when determining the position and the velocity of the object140. For this, the BS112may receive report messages1003,1004from each one of the terminals130,131. Additionally, the BS112may take into consideration the echo172directly received by the BS112when determining the position and the velocity of the object140. By taking into account a plurality of sources of information regarding the radar probing109, the accuracy in determining the position and the velocity of the object140as part of the radar probing109can be increased. FIG.9schematically illustrates an example of the radar probing109where the employed radar probe pulses171have anisotropic directional transmission profiles181-183. In the example ofFIG.9, it is illustrated that the radar probe pulse171may be received by the terminal130in a LOS transmission, while the respective echo172is reflected back to the BS112(and optionally also to the terminal130; not illustrated inFIG.9). Here, it is possible that the LOS transmission of the pilot signal implementing the radar probe pulse171is used for channel sensing and/or link adaptation. The reflection of the echo172can be used as part of the radar probing109. As can be seen, one and the same waveform can be re-used as a pilot signal on the one hand and as a radar probe pulse on the other hand. FIG.10is a schematic illustration of the BS112. The BS112comprises a processor1122, e.g., a multicore processor. The BS112further comprises a radio transceiver1121. The radio transceiver1121is configured to communicate on the radio channel101, e.g., by transmitting and receiving (transceiving). Furthermore, the radio transceiver1121is configured to transmit and/or receive radar probe pulses171. The processor1122can be configured to perform techniques as described herein with respect to coexistence of data transmission108and radar probing109. For this, a non-volatile memory may be provided which stores respective control instructions. FIG.11is a schematic illustration of the terminal130. The terminal130comprises a processor1302, e.g., a multicore processor. The terminal130further comprises a radio transceiver1301. The radio transceiver1301is configured to communicate on the radio channel101, e.g., by transceiving. Furthermore, the radio transceiver1301is configured to transmit and/or receive radar probe pulses171.
The processor1302can be configured to perform techniques as described herein with respect to coexistence of data transmission108and radar probing109. For this, a non-volatile memory may be provided which stores respective control instructions. FIG.12schematically illustrates the transceivers1121,1301in greater detail. The transceivers1121,1301comprise an antenna array1400in the illustrated example. The antenna array1400may support multiple input multiple output (MIMO) scenarios. Based on the antenna array1400, it is possible to employ an anisotropic sensitivity profile during reception, e.g., of an echo172of a radar probe pulse171. E.g., in some examples, it is possible that the accuracy of the radar probing109is further increased by employing an anisotropic sensitivity profile of the antenna array1400of the radio transceiver1121,1301. Such an anisotropic sensitivity profile of the antenna array1400may be combined with an isotropic directional transmission profile180or an anisotropic directional transmission profile181-183of the respective radar probe pulse171. In the example ofFIG.12, the transceivers1121,1301comprise a single antenna array1400. In further examples, it is possible that the transceivers1121,1301comprise a plurality of antenna arrays1400. The plurality of antenna arrays1400may be oriented differently to cover different directions with respect to the respective device. Omnidirectional coverage can be provided. FIG.12furthermore schematically illustrates receive properties such as the receive power level1413; the angle of arrival1412; and the time-of-flight1411. Further receive properties of interest regarding the radar probing109include the Doppler shift which may be used in order to determine a velocity of the object140, e.g., in radial direction. E.g., the angle of arrival1412may be determined in absolute terms, e.g., with respect to a magnetic North direction provided by a separate compass, and a gravity meter for elevation (not illustrated inFIG.12), etc. It is also possible that the angle of arrival1412is determined in relative terms, e.g., with respect to a characteristic direction of the antenna array1400. Depending on the definition of the angle of arrival1412and/or the further receive properties, corresponding information may be included in a respective report message1003. A further receive property is the phase shift, e.g., with respect to an arbitrary reference phase such as the phase of the non-reflected radar probe pulse, etc. FIG.13is a flowchart of a method according to various embodiments. E.g., the method ofFIG.13may be executed by the processor1122of the BS112and/or by the processor1302of the terminal130. First, at3001, data communication108is executed. For this, packetized data may be transmitted and/or received on the radio channel101in the transmission blocks160. Typically, the data communication108is implemented based on LOS signal propagation. Second, at3002, participation in the radar probing109is executed. In the depicted example, the radar probing109is implemented based on the transmission of pilot signals. In detail, the pilot signals are re-used as radar probe pulses171.
Typically, the radar probing109is executed based on non-LOS signal propagation, i.e., based on echoes. Step3002may comprise one or more of the following: transmitting a radar probe pulse171(cf.FIG.14:3011); receiving an echo172of a radar probe pulse171(cf.FIG.15:3021); determining at least one of a position and a velocity of a passive object based on at least one receive property1411-1413of the radar probe pulse171; determining the at least one receive property1411-1413from a received echo172; receiving a control message1003indicating at least one of the at least one receive property1411-1413, a position, and a velocity of a radar receiver. Summarizing, techniques have been illustrated above which enable the use of reflections of pilot signals to implement radar probing. Radar probing can allow the position and/or the velocity of passive objects to be determined from echoes of the pilot signals. These techniques are based on the finding that properties of the electromagnetic waves at higher frequencies are more contained. E.g., transmission of high-frequency electromagnetic waves may be associated with comparably narrow anisotropic directional transmission profiles. This may be used to obtain, by the radar probing, radar pictures having a high spatial resolution. The radar probing may comprise determining one or more receive properties of echoes such as power level, delay profiles, angle of arrival, Doppler shift, phase shift, etc. In some examples, the radar probe pulses implemented by the pilot signals are transmitted into well-defined directions. For this, anisotropic directional transmission profiles are employed. E.g., pencil beams having an opening angle of less than 90°, preferably less than 45°, more preferably less than 20°, may be employed. Then, it is possible that the radar transmitter also implements reception of echoes of the radar probe pulses. E.g., the same antenna array used for transmitting the radar probe pulses can be used to receive echoes of the radar probe pulses. I.e., the radar transmitter and the radar receiver may be co-located. In some examples, the radar transmitter may be implemented by a first device and the radar receiver may be implemented by a different, second device. Generally, it is possible that multiple distributed antenna arrays are used for receiving echoes of the radar probe pulses. In such examples, report messages may be communicated between the first and second devices. Depending on the particular implementation, the information content included in such report messages may vary: in one example, the radar receiver may report back raw data of the received echo. In other examples, the radar receiver may perform some postprocessing to obtain, e.g., a receive property such as the angle of arrival, power level, etc., or even to determine the position and/or velocity of the passive object from which the echo originates. In some examples, the above-identified anisotropic directional transmission profiles can be implemented for pilot signals transmitted by the base station into different beam angles associated with different virtual cells. Here, the pilot signals of each virtual cell can be orthogonal to pilot signals of other virtual cells. The pilot signals can be received by one or more terminals and can be used for channel sensing and/or link adaptation. E.g., antenna weights of an antenna array can be determined based on one or more receive properties of the pilot signals.
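One such receive property is the angle of arrival1412. The following toy sketch shows how it could be estimated from the phase difference of a pilot signal or echo between two antennas of an antenna array, under a plane-wave assumption; the spacing, wavelength, and measured phase are hypothetical values.

```python
# Minimal sketch: estimating the angle of arrival 1412 from the phase
# difference of a pilot signal/echo between two antennas of the antenna
# array 1400 separated by half a wavelength. A toy plane-wave model with
# hypothetical values; the angle is relative to the array broadside.
import math

def angle_of_arrival(phase_diff_rad: float, spacing_m: float,
                     wavelength_m: float) -> float:
    """Plane-wave model: phase_diff = 2*pi*spacing*sin(aoa)/wavelength."""
    s = phase_diff_rad * wavelength_m / (2 * math.pi * spacing_m)
    return math.asin(max(-1.0, min(1.0, s)))

wavelength = 0.005                  # 5 mm, i.e., a 60 GHz carrier
spacing = wavelength / 2
measured_phase = 1.57               # ~pi/2 radians between the antennas
print(math.degrees(angle_of_arrival(measured_phase, spacing, wavelength)))
# -> ~30 degrees from broadside
```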
In addition to such usage of the pilot signals for channel sensing and/or link adaptation, it is also possible that the terminal determines one or more receive properties for available reflections/echoes of the pilot signals. Here, optionally, also echoes of the pilot signals from neighboring virtual cells can be taken into account. Although the invention has been shown and described with respect to certain preferred examples, equivalents and modifications will occur to others skilled in the art upon reading and understanding the specification. The present invention includes all such equivalents and modifications and is limited only by the scope of the appended claims. E.g., while above various examples have been described with respect to radar probe pulses transmitted by the BS, respective techniques may be readily implemented with respect to radar probe pulses transmitted by the terminal. E.g., in some examples device-to-device or vehicle-to-vehicle scenarios may be combined with radar probing. Here, it is not required that the BS is involved as radar transmitter and/or radar receiver.
DETAILED DESCRIPTION Overview Integrating a radar system within an electronic device can be challenging. The electronic device, for example, may have a limited amount of available space. To meet a size or layout constraint of the electronic device, the radar system can be implemented with fewer antennas. This can make it challenging, however, for the radar system to realize a target angular resolution. To address this challenge, techniques are described that implement electromagnetic vector sensors (EMVS) for a smart-device-based radar system. Instead of including an antenna array of similar antenna elements, the radar system includes two or more electromagnetic vector sensors. At least one of the electromagnetic vector sensors is used for transmission and at least another of the electromagnetic vector sensors is used for reception. Each electromagnetic vector sensor includes a group of antennas with different antenna patterns, orientations, and/or polarizations. The various antenna patterns and polarizations of these antennas enable the radar system to perform angle estimation, object or material classification, and/or multipath interference rejection. An overall footprint of the two electromagnetic vector sensors (e.g., one for transmission and one for reception) can be smaller than antenna arrays used by other radar systems, thereby enabling the radar system to be implemented within space-constrained devices. Operating Environment FIG.1is an illustration of example environments100-1to100-6in which techniques using, and an apparatus including, a smart-device-based radar system with electromagnetic vector sensors may be embodied. In the depicted environments100-1to100-6, a smart device104includes a radar system102capable of detecting one or more objects (e.g., users) using electromagnetic vector sensors (ofFIG.2). The smart device104is shown to be a smartphone in environments100-1to100-5and a smart vehicle in the environment100-6. In the environments100-1to100-4, a user performs different types of gestures, which are detected by the radar system102. In some cases, the user performs a gesture using an appendage or body part. Alternatively, the user can also perform a gesture using a stylus, a hand-held object, a ring, or any type of material that can reflect radar signals. The radar system102uses electromagnetic vector sensors to recognize the gesture that is performed. The radar system102can also use electromagnetic vector sensors to distinguish between multiple users, which may or may not be at a same distance (e.g., slant range) from the radar system102. In environment100-1, the user makes a scrolling gesture by moving a hand above the smart device104along a horizontal dimension (e.g., from a left side of the smart device104to a right side of the smart device104). In the environment100-2, the user makes a reaching gesture, which decreases a distance between the smart device104and the user's hand. The users in environment100-3make hand gestures to play a game on the smart device104. In one instance, a user makes a pushing gesture by moving a hand above the smart device104along a vertical dimension (e.g., from a bottom side of the smart device104to a top side of the smart device104). Using electromagnetic vector sensors, the radar system102can recognize the gestures performed by the user. In the environment100-4, the smart device104is stored within a purse, and the radar system102provides occluded-gesture recognition by detecting gestures that are occluded by the purse.
The radar system102can also recognize other types of gestures or motions not shown inFIG.1. Example types of gestures include a knob-turning gesture in which a user curls their fingers to grip an imaginary doorknob and rotate their fingers and hand in a clockwise or counter-clockwise fashion to mimic an action of turning the imaginary doorknob. Another example type of gesture includes a spindle-twisting gesture, which a user performs by rubbing a thumb and at least one other finger together. The gestures can be two-dimensional, such as those used with touch-sensitive displays (e.g., a two-finger pinch, a two-finger spread, or a tap). The gestures can also be three-dimensional, such as many sign-language gestures, e.g., those of American Sign Language (ASL) and other sign languages worldwide. Upon detecting each of these gestures, the smart device104can perform an action, such as display new content, move a cursor, activate one or more sensors, open an application, and so forth. In this way, the radar system102provides touch-free control of the smart device104. In the environment100-5, the radar system102generates a three-dimensional map of a surrounding environment for contextual awareness. The radar system102also detects and tracks multiple users to enable both users to interact with the smart device104. The radar system102can also perform vital-sign detection. In the environment100-6, the radar system102monitors vital signs of a user that drives a vehicle. Example vital signs include a heart rate and a respiration rate. If the radar system102determines that the driver is falling asleep, for instance, the radar system102can cause the smart device104to alert the user. Alternatively, if the radar system102detects a life-threatening emergency, such as a heart attack, the radar system102can cause the smart device104to alert a medical professional or emergency services. In some implementations, the radar system102in the environment100-6can support collision avoidance for autonomous driving. Some implementations of the radar system102are particularly advantageous as applied in the context of smart devices104, for which there is a convergence of issues. This can include a need for limitations in the spacing and layout of the radar system102and for low power consumption. Exemplary overall lateral dimensions of the smart device104can be, for example, approximately eight centimeters by approximately fifteen centimeters. Exemplary footprints of the radar system102can be even more limited, such as approximately four millimeters by six millimeters with the electromagnetic vector sensors included. Exemplary power consumption of the radar system102may be on the order of a few milliwatts to tens of milliwatts (e.g., between approximately two milliwatts and twenty milliwatts). The requirement of such a limited footprint and power consumption for the radar system102enables the smart device104to include other desirable features in a space-limited package (e.g., a camera sensor, a fingerprint sensor, a display, and so forth). The smart device104and the radar system102are further described with respect toFIG.2. FIG.2illustrates the radar system102as part of the smart device104. The smart device104is illustrated with various non-limiting example devices including a desktop computer104-1, a tablet104-2, a laptop104-3, a television104-4, a computing watch104-5, computing glasses104-6, a gaming system104-7, a microwave104-8, and a vehicle104-9.
Other devices may also be used, such as a home service device, a smart speaker, a smart thermostat, a security camera, a baby monitor, a Wi-Fi™ router, a drone, a trackpad, a drawing pad, a netbook, an e-reader, a home automation and control system, a wall display, and another home appliance. Note that the smart device104can be wearable, non-wearable but mobile, or relatively immobile (e.g., desktops and appliances). The radar system102can be used as a stand-alone radar system or used with, or embedded within, many different smart devices104or peripherals, such as in control panels that control home appliances and systems, in automobiles to control internal functions (e.g., volume, cruise control, or even driving of the car), or as an attachment to a laptop computer to control computing applications on the laptop. The smart device104includes one or more computer processors202and at least one computer-readable medium204, which includes a memory medium and a storage medium. Applications and/or an operating system (not shown) embodied as computer-readable instructions on the computer-readable medium204can be executed by the computer processor202to provide some of the functionalities described herein. The computer-readable medium204also includes a radar-based application206, which uses radar data generated by the radar system102to perform a function, such as presence detection, gesture-based touch-free control, collision avoidance for autonomous driving, human vital-sign notification, and so forth. The smart device104can also include a network interface208for communicating data over wired, wireless, or optical networks. For example, the network interface208may communicate data over a local-area-network (LAN), a wireless local-area-network (WLAN), a personal-area-network (PAN), a wide-area-network (WAN), an intranet, the Internet, a peer-to-peer network, a point-to-point network, a mesh network, and the like. The smart device104may also include a display (not shown). The radar system102includes a communication interface210to transmit the radar data to a remote device, though this need not be used when the radar system102is integrated within the smart device104. In general, the radar data provided by the communication interface210is in a format usable by the radar-based application206. The radar system102also includes at least one transmit electromagnetic vector sensor212, at least one receive electromagnetic vector sensor214, and at least one transceiver216to transmit and receive radar signals. The transmit electromagnetic vector sensor212includes at least two antennas associated with different polarizations. The receive electromagnetic vector sensor214includes at least three antennas associated with different polarizations. The antennas of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can be horizontally polarized, vertically polarized, or circularly polarized. In some situations, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214implement a multiple-input multiple-output (MIMO) radar capable of transmitting and receiving multiple distinct waveforms at a given time. The transceiver216includes circuitry and logic for transmitting radar signals via the transmit electromagnetic vector sensor212and receiving reflected versions of the radar signals via the receive electromagnetic vector sensor214.
Components of the transceiver216can include amplifiers, phase shifters, mixers, switches, analog-to-digital converters, or filters for conditioning the radar signals. The transceiver216also includes logic to perform in-phase/quadrature (I/Q) operations, such as modulation or demodulation. A variety of modulations can be used, including linear frequency modulations, triangular frequency modulations, stepped frequency modulations, or phase modulations. Alternatively, the transceiver216can produce radar signals having a relatively constant frequency or a single tone. The transceiver216can be configured to support continuous-wave or pulsed radar operations. A frequency spectrum (e.g., range of frequencies) that the transceiver216uses to generate the radar signals can encompass frequencies between 1 and 400 gigahertz (GHz), between 4 and 100 GHz, between 1 and 24 GHz, between 2 and 4 GHz, between 50 and 70 GHz, between 57 and 64 GHz, or at approximately 2.4 GHz. In some cases, the frequency spectrum can be divided into multiple sub-spectrums that have similar or different bandwidths. The bandwidths can be on the order of 500 megahertz (MHz), 1 GHz, 2 GHz, and so forth. In some cases, the bandwidths are approximately 20% or more of a center frequency to implement an ultrawideband radar. Different frequency sub-spectrums may include, for example, frequencies between approximately 57 and 59 GHz, 59 and 61 GHz, or 61 and 63 GHz. Although the example frequency sub-spectrums described above are contiguous, other frequency sub-spectrums may not be contiguous. Multiple frequency sub-spectrums (contiguous or not) that have a same bandwidth may be used by the transceiver216to generate multiple radar signals, which are transmitted simultaneously or separated in time. In some situations, multiple contiguous frequency sub-spectrums may be used to transmit a single radar signal, thereby enabling the radar signal to have a wide bandwidth. The radar system102also includes one or more system processors218and at least one system medium220(e.g., one or more computer-readable storage media). The system medium220includes an electromagnetic-vector-sensor (EMVS) processing module222. The electromagnetic-vector-sensor processing module222enables the system processor218to process responses from the receive electromagnetic vector sensor214to detect a user, determine a position of the user, recognize a gesture performed by the user, measure a vital sign of the user, or perform collision avoidance. For example, the electromagnetic-vector-sensor processing module222can analyze samples of the received radar signals from the receive electromagnetic vector sensor214to estimate an angle to an object (or an angle to a portion of the user). In particular, the electromagnetic-vector-sensor processing module222can apply the least-squares principle and compute a cost function for a range of angles (e.g., azimuth and/or elevation) to generate information representative of a 2D image. A peak response within the 2D image can be used to estimate an angle to the object. Also, the electromagnetic-vector-sensor processing module222can determine a material composition of the object and/or classify the object. For example, the electromagnetic-vector-sensor processing module222can classify the object as a human or an inanimate object.
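The grid-based angle estimation described above can be sketched as follows. A simple three-component response model stands in for the sensor's true polarimetric response, and the matched cost over the grid is equivalent to minimizing the least-squares residual at each candidate angle; all names and values are hypothetical.

```python
# Minimal sketch of the angle scan described above: compute a cost for a
# grid of candidate azimuth/elevation angles and take the peak response
# as the angle estimate. Maximizing |r . s| / ||r|| over the grid is
# equivalent to minimizing the least-squares residual of fitting the
# samples s with a scaled response r. Response model is a stand-in.
import numpy as np

def response(az, el):
    """Toy model of the receive sensor's response per candidate angle."""
    return np.array([np.cos(az) * np.cos(el),
                     np.sin(az) * np.cos(el),
                     np.sin(el)])

def estimate_angle(samples, az_grid, el_grid):
    """Matched cost over the grid; the peak yields the angle estimate."""
    image = np.zeros((len(az_grid), len(el_grid)))
    for i, az in enumerate(az_grid):
        for j, el in enumerate(el_grid):
            r = response(az, el)
            image[i, j] = abs(np.vdot(r, samples)) / np.linalg.norm(r)
    i, j = np.unravel_index(np.argmax(image), image.shape)
    return az_grid[i], el_grid[j], image   # image ~ the 2D image above

az_grid = np.deg2rad(np.arange(-60, 61, 2))
el_grid = np.deg2rad(np.arange(-30, 31, 2))
truth = (np.deg2rad(20), np.deg2rad(10))
samples = response(*truth) * (0.8 + 0.1j)          # synthetic reflection
az, el, _ = estimate_angle(samples, az_grid, el_grid)
print(np.rad2deg(az), np.rad2deg(el))              # ~20, ~10
```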
In an example instance, the electromagnetic-vector-sensor processing module222can determine a polarimetric signature of the object (or an object reflection matrix) to determine reflection characteristics of the object. Based on these reflection characteristics, the electromagnetic-vector-sensor processing module222can classify the object. Additionally or alternatively, the electromagnetic-vector-sensor processing module222can detect and attenuate multipath interference or clutter within the received radar signals. By attenuating the interference, the radar system102can achieve a higher accuracy in estimating a position of the object and achieve a lower false-alarm rate. In an alternative implementation (not shown), the electromagnetic-vector-sensor processing module222is included within the computer-readable medium204and implemented by the computer processor202. This enables the radar system102to provide the smart device104raw data via the communication interface210such that the computer processor202can process the raw data for the radar-based application206. General operations of the radar system102are further described with respect toFIG.3. FIG.3illustrates an example operation of the radar system102. In the depicted configuration, the radar system102is implemented as a frequency-modulated continuous-wave radar. However, other types of radar architectures can be implemented, as described above with respect toFIG.2. In environment300, a user302is located at a particular slant range304from the radar system102. To detect the user302, the radar system102transmits a radar transmit signal306. At least a portion of the radar transmit signal306is reflected by the user302. This reflected portion represents a radar receive signal308. The radar system102receives the radar receive signal308and processes the radar receive signal308to extract data for the radar-based application206. As depicted, an amplitude of the radar receive signal308is smaller than an amplitude of the radar transmit signal306due to losses incurred during propagation and reflection. The radar transmit signal306includes a sequence of chirps310-1to310-C, where C represents a positive integer greater than one. The radar system102can transmit the chirps310-1to310-C in a continuous burst or transmit the chirps310-1to310-C as time-separated pulses. A duration of each chirp310-1to310-C can be on the order of tens to thousands of microseconds (e.g., between approximately 30 microseconds (μs) and 5 milliseconds (ms)), for instance. Individual frequencies of the chirps310-1to310-C can increase or decrease over time. In the depicted example, the radar system102employs a two-slope cycle (e.g., triangular frequency modulation) to linearly increase and linearly decrease the frequencies of the chirps310-1to310-C over time. The two-slope cycle enables the radar system102to measure the Doppler frequency shift caused by motion of the user302. In general, transmission characteristics of the chirps310-1to310-C (e.g., bandwidth, center frequency, duration, and transmit power) can be tailored to achieve a particular detection range, range resolution, or Doppler sensitivity for detecting one or more characteristics of the user302or one or more actions performed by the user302. At the radar system102, the radar receive signal308represents a delayed version of the radar transmit signal306. The amount of delay is proportional to the slant range304(e.g., distance) from the radar system102to the user302.
In particular, this delay represents a summation of a time it takes for the radar transmit signal306to propagate from the radar system102to the user302and a time it takes for the radar receive signal308to propagate from the user302to the radar system102. If the user302is moving, the radar receive signal308is shifted in frequency relative to the radar transmit signal306due to the Doppler effect. Similar to the radar transmit signal306, the radar receive signal308is composed of one or more of the chirps310-1to310-C. The multiple chirps310-1to310-C enable the radar system102to make multiple observations of the user302over a predetermined time period. The radar system102uses the transmit electromagnetic vector sensor212to transmit the radar transmit signal306. The radar system102also uses the receive electromagnetic vector sensor214to receive the radar receive signal308. Example implementations of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214are further described with respect toFIGS.4-1to6. FIG.4-1illustrates example components of the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214. Each electromagnetic vector sensor212and214includes multiple antennas402. In example implementations, the transmit electromagnetic vector sensor212includes at least two antennas402(e.g., antennas402-1and402-2). The transmit electromagnetic vector sensor212can optionally include the antenna402-3. The receive electromagnetic vector sensor214includes at least three antennas402(e.g., antennas402-1,402-2, and402-3). The antennas402-1to402-3have respective polarizations404-1to404-3. The polarizations404-1to404-3can be unique polarizations that differ based on differences in the orientations, designs and/or operations of the antennas402-1to402-3. In an example implementation, the polarizations404-1to404-3are orthogonal (e.g., normal) to each other. For example, the polarization404-1can be a first linear polarization along a first axis (e.g., a vertical or Y axis), the polarization404-2can be a second linear polarization along a second axis (e.g., a horizontal or X axis) that is orthogonal to the first axis, and the polarization404-3can be a third linear polarization along a third axis (e.g., a Z axis) that is orthogonal to the first axis and the second axis. In other implementations, one or more of the polarizations404-1to404-3can be a circular polarization, such as a right-hand circular polarization (RHCP) or a left-hand circular polarization (LHCP). For example, the polarizations404-1and404-2can be orthogonal linear polarizations and the polarization404-3can be a circular polarization. Other polarizations are also possible, including elliptical polarizations. For implementations in which the polarization404-1represents a linear polarization, the antenna402-1can be implemented using a linear strip antenna408-1(e.g., a rectangular microstrip antenna or a rectangular patch antenna). The antenna402-1can also be implemented as a dipole antenna410-1. Likewise, the antenna402-2can be implemented as a linear strip antenna408-2or a dipole antenna410-2to provide another linear polarization as the polarization404-2. In some implementations, the dipole antennas410-1and410-2can be implemented as a type of linear strip antenna408. To enable the antennas402-1and402-2to have different polarizations, the antennas402-1and402-2can be oriented differently from each other. 
For example, the antenna402-1can have a length that is oriented along a vertical axis, and the antenna402-2can have a length that is oriented along a horizontal axis. In some implementations, the antennas402-1and402-2are oriented perpendicular to each other. For implementations in which the polarization404-3represents an additional linear polarization, the antenna402-3can be implemented using a loop antenna412(e.g., a ring-patch antenna). In some implementations, the loop antenna412is formed using a C-shaped conductor. In alternative implementations, the loop antenna412can have a rectangular shape, a circular shape, an elliptical shape, or a triangular shape. In general, a variety of different types of antennas can be used to implement one or more antennas of the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214, including linear strip antennas, dipole antennas, loop antennas, patch antennas, or crossed-dipole antennas. The quantity of antennas within each of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can be limited to three or fewer in order to enable the radar system to fit within space-constrained devices, such as the smart device104. However, other implementations of the radar system can include a transmit electromagnetic vector sensor212and/or a receive electromagnetic vector sensor214with more than three antennas. As an example, the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214can include a fourth antenna with a different antenna pattern, polarization, and/or orientation relative to the antennas402-1to402-3. An example arrangement of the antennas402-1to402-3of the transmit electromagnetic vector sensor or the receive electromagnetic vector sensor is further described with respect toFIG.4-2. FIG.4-2illustrates an example implementation of the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214. In the depicted configuration, the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214includes the linear strip antenna408-1, the linear strip antenna408-2, and the loop antenna412, which are disposed on a substrate414. In this manner, the linear strip antenna408-1, the linear strip antenna408-2, and the loop antenna412are coplanar (e.g., are disposed on a common plane). The linear strip antenna408-1has a length that is oriented along a vertical (Y) axis416(Y416). In this way, the polarization404-1of the linear strip antenna408-1is along the Y axis416. The linear strip antenna408-2has a length that is oriented along a horizontal (X) axis418(X418). As such, the polarization404-2of the linear strip antenna408-2is oriented along the X axis418. The linear strip antennas408-1and408-2are offset from each other along the vertical axis416, the horizontal axis418, or a combination thereof. In the depicted configuration, the loop antenna412has a C-shaped pattern. In some implementations, a dimension of the loop antenna412along the vertical axis416can be less than or equal to the length of the linear strip antenna408-1. Also, another dimension of the loop antenna412along the horizontal axis418can be less than or equal to the length of the linear strip antenna408-2. The polarization404-3of the loop antenna412is along a Z axis420, which is orthogonal to the Y axis416and the X axis418. If the loop antenna412has relatively straight sides, these sides can be oriented at approximately a +/−45 degree angle.
This orientation can reduce coupling between portions of the loop antenna412and the linear strip antennas408-1and408-2. The loop antenna412can also be positioned in a manner that reduces an overall footprint of the transmit electromagnetic vector sensor212or the receive electromagnetic vector sensor214. For example, the loop antenna412and the linear strip antenna408-2can be positioned on a same side of the linear strip antenna408-1(e.g., on a right side of the linear strip antenna408-1). Also, the loop antenna412and the linear strip antenna408-1can be positioned on a same side of the linear strip antenna408-2(e.g., on a left side of the linear strip antenna408-2). The positioning of the loop antenna412can also be further explained based on axes that intersect the linear strip antennas408-1and408-2. Consider a first axis that intersects a center of the linear strip antenna408-1and is parallel to the horizontal axis418. Also consider a second axis that intersects a center of the linear strip antenna408-2and is parallel to the vertical axis416. InFIG.4-2, an intersection of the first axis and the second axis indicates a general position of the loop antenna412. The transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can be implemented together on a common plane or on a same substrate414, as further described with respect toFIGS.5-1to6. FIG.5-1illustrates an example implementation of the transmit electromagnetic vector sensor212and an example implementation of the receive electromagnetic vector sensor214. In the depicted configuration, the transmit electromagnetic vector sensor212includes the linear strip antenna408-1and the linear strip antenna408-2. The receive electromagnetic vector sensor214includes the linear strip antenna408-3, the linear strip antenna408-4, and the loop antenna412. In the depicted configuration, the loop antenna412is positioned between the linear strip antennas408-1and408-3along the horizontal axis418. Also, the loop antenna412is positioned between the linear strip antennas408-2and408-4along the vertical axis416. In this example implementation, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can have a combined footprint of approximately three millimeters by three millimeters. In other words, a distance between the linear strip antennas408-1and408-3is approximately three millimeters, and a distance between the linear strip antennas408-2and408-4is approximately three millimeters. The compact design of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214ofFIG.5-1can allow the radar system102to fit within space-constrained devices, such as the smart device104. For devices that have available space, the transmit electromagnetic vector sensor212can be implemented with an additional antenna and/or a distance between the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can be increased to reduce cross-coupling, as further described with respect toFIGS.5-2to6. FIG.5-2illustrates an example implementation of the transmit electromagnetic vector sensor212and an example implementation of the receive electromagnetic vector sensor214positioned side-by-side with similar orientations502-1. In the depicted configuration, the antennas402of the transmit electromagnetic vector sensor212are disposed on a first portion of the substrate414(e.g., a left portion of the substrate414). 
The antennas402of the receive electromagnetic vector sensor214are disposed on a second portion of the substrate414(e.g., a right portion of the substrate414). The antennas402of the transmit electromagnetic vector sensor212are coplanar with the antennas402of the receive electromagnetic vector sensor214. In this example, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214are similar to the implementation shown inFIG.4-2. In particular, the transmit electromagnetic vector sensor212includes the linear strip antennas408-1and408-2. The transmit electromagnetic vector sensor212also includes the loop antenna412-1. The receive electromagnetic vector sensor214includes the linear strip antennas408-3and408-4. The receive electromagnetic vector sensor214also includes the loop antenna412-2. The linear strip antennas408-1and408-3are approximately parallel to each other and are parallel to the vertical axis416. Also, the linear strip antennas408-2and408-4are approximately parallel to each other and are parallel to the horizontal axis418. The transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214have a same orientation502-1. Based on the orientation502-1, the linear strip antenna408-1is positioned on a left side of the loop antenna412-1along the horizontal axis418. Also, the linear strip antenna408-2is positioned on a bottom side of the loop antenna412-1along the vertical axis416. Likewise, the linear strip antenna408-3is positioned on a left side of the loop antenna412-2and the linear strip antenna408-4is positioned on a bottom side of the loop antenna412-2. As such, the loop antenna412-1is generally positioned between the linear strip antennas408-1and408-3along the horizontal axis418. Also, the linear strip antenna408-3is generally positioned between the loop antennas412-1and412-2along the horizontal axis418. Lengths of the linear strip antennas408-2and408-4can be oriented along a same horizontal axis418. In the example implementation ofFIG.5-2, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can have a combined footprint of approximately three millimeters by five millimeters. In other words, a distance between a furthest edge of the linear strip antenna408-2or408-4and a furthest edge of the linear strip antenna408-1or408-3along the vertical axis416is approximately three millimeters. Also, a distance between a furthest edge of the linear strip antenna408-1and a furthest edge of the linear strip antenna408-4along the horizontal axis418is approximately five millimeters. In this example, both the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214are arranged in a same orientation502-1. While it may be easier to manufacture the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214with the same orientation502-1, the cross-coupling between the linear strip antennas408-1and408-3and the cross-coupling between the linear strip antennas408-2and408-4can be reduced by implementing the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214with different orientations, as further described with respect toFIG.5-3. FIG.5-3illustrates an example implementation of the transmit electromagnetic vector sensor212and an example implementation of the receive electromagnetic vector sensor214positioned side-by-side in different orientations502-1and502-2, respectively.
In this example, the receive electromagnetic vector sensor214has the orientation502-2, which differs from the orientation502-1of the transmit electromagnetic vector sensor212. In one aspect, the orientation502-2is rotated approximately 180 degrees relative to the orientation502-1. Based on the orientation502-2, the linear strip antenna408-3is positioned on a right side of the loop antenna412-2along the horizontal axis418. Also, the linear strip antenna408-4is positioned on a top side of the loop antenna412-2along the vertical axis416. As such, the loop antennas412-1and412-2are generally positioned between the linear strip antennas408-1and408-3along the horizontal axis418. Also, the loop antennas412-1and412-2are generally positioned between the linear strip antennas408-2and408-4along the vertical axis416. In general, the linear strip antennas408-1and408-3are positioned on opposite sides of the substrate414(e.g., a left side and a right side). Consider a vertical axis416that intersects a center of the linear strip antenna408-2or408-4. In this case, the linear strip antennas408-1and408-3are positioned on opposite sides of the vertical axis416. The linear strip antennas408-2and408-4are also positioned on opposite sides of the substrate414(e.g., a top side and a bottom side). Consider a horizontal axis418that intersects a center of the linear strip antenna408-1or408-3. In this case, the linear strip antennas408-2and408-4are positioned on opposite sides of the horizontal axis418. By having the transmit electromagnetic vector sensor212in the orientation502-1and the receive electromagnetic vector sensor214in the orientation502-2, a distance between the linear strip antennas408-1and408-3and another distance between the linear strip antennas408-2and408-4can be larger relative to the distances shown inFIG.5-2. With larger distances between antennas associated with similar polarizations, the radar system102can reduce cross-coupling between the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214. In the example implementation ofFIG.5-3, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can have a combined footprint of approximately three millimeters by five millimeters. In other words, a distance between a furthest edge of the linear strip antenna408-2and a furthest edge of the linear strip antenna408-4along the vertical axis416is approximately three millimeters. Also, a distance between a furthest edge of the linear strip antenna408-1and a furthest edge of the linear strip antenna408-3along the horizontal axis418is approximately five millimeters. InFIGS.5-2and5-3, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214are positioned next to each other along the horizontal axis418. In this manner, the antennas402of the receive electromagnetic vector sensor214are offset from the antennas402of the transmit electromagnetic vector sensor212along the horizontal axis418. In other implementations, the antennas402of the receive electromagnetic vector sensor214can also be offset from the antennas402of the transmit electromagnetic vector sensor212along the vertical axis416, as further described with respect toFIGS.5-4and5-5. FIG.5-4illustrates an example implementation of the transmit electromagnetic vector sensor212and an example implementation of the receive electromagnetic vector sensor214offset from each other with similar orientations502-1.
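Before turning to the offset layouts ofFIGS.5-4and5-5, the benefit of the rotated orientation502-2can be illustrated numerically. The following sketch uses assumed coordinates (not taken from the figures) to show that rotating the receive sensor by 180 degrees increases the spacing between antennas of like polarization, which is the mechanism for reduced cross-coupling described above:

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Assumed positions of the vertically polarized strips, in millimeters.
    tx_408_1 = (0.0, 1.0)           # left of loop antenna 412-1
    rx_408_3_same = (2.5, 1.0)      # FIG. 5-2: left of loop antenna 412-2
    rx_408_3_rotated = (4.0, 1.0)   # FIG. 5-3: right of loop antenna 412-2

    print(dist(tx_408_1, rx_408_3_same))     # smaller spacing, more coupling
    print(dist(tx_408_1, rx_408_3_rotated))  # larger spacing, less coupling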
In the depicted configuration, the antennas402of the transmit electromagnetic vector sensor212are disposed on a first portion of the substrate414(e.g., a bottom-left portion of the substrate414). The antennas402of the receive electromagnetic vector sensor214are disposed on a second portion of the substrate414(e.g., a top-right portion of the substrate414). The antennas402of the transmit electromagnetic vector sensor212are coplanar with the antennas402of the receive electromagnetic vector sensor214. In this example, the antennas402of the receive electromagnetic vector sensor214are offset along the vertical axis416and the horizontal axis418relative to the antennas402of the transmit electromagnetic vector sensor212. In the example implementation ofFIG.5-4, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can have a combined footprint of approximately five millimeters by five millimeters. In other words, a distance between a furthest edge of the linear strip antenna408-2and a furthest edge of the linear strip antenna408-3along the vertical axis416is approximately five millimeters. Also, a distance between a furthest edge of the linear strip antenna408-1and a furthest edge of the linear strip antenna408-4along the horizontal axis418is approximately five millimeters. In this example, both the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214are arranged in a same orientation502-1. While it may be easier to manufacture the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214with the same orientation502-1, the cross-coupling between the linear strip antennas408-1and408-3and the cross-coupling between the linear strip antennas408-2and408-4can be reduced by implementing the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214with different orientations, as further described with respect toFIG.5-5. FIG.5-5illustrates an example implementation of the transmit electromagnetic vector sensor212and an example implementation of the receive electromagnetic vector sensor214offset from each other with different orientations502-1and502-2, respectively. In this example, the receive electromagnetic vector sensor214has the orientation502-2, which differs from the orientation502-1of the transmit electromagnetic vector sensor212. As described above with respect toFIG.5-3, the orientation502-2is rotated approximately 180 degrees relative to the orientation502-1. In the example implementation ofFIG.5-5, the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can have a combined footprint of approximately five millimeters by five millimeters. In other words, a distance between a furthest edge of the linear strip antenna408-2and a furthest edge of the linear strip antenna408-4along the vertical axis416is approximately five millimeters. Also, a distance between a furthest edge of the linear strip antenna408-1and a furthest edge of the linear strip antenna408-3along the horizontal axis418is approximately five millimeters. The example dimensions given forFIGS.5-1to5-5can be applicable to a radar system102that utilizes frequencies between approximately 50 and 70 GHz. In general, a footprint of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214varies based on the frequencies the radar system102is designed to use.
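The dependence of footprint on design frequency noted here (and elaborated in the next paragraph) follows from the antenna dimensions scaling with the operating wavelength, lambda = c / f. The sketch below is illustrative only; the proportional scaling from the roughly three-millimeter footprint at 60 GHz is an assumption, not a statement from the disclosure:

    # Illustrative wavelength and footprint scaling for several frequencies.
    C = 299_792_458.0  # speed of light, m/s

    def wavelength_mm(freq_ghz):
        return C / (freq_ghz * 1e9) * 1e3

    ref_freq_ghz, ref_footprint_mm = 60.0, 3.0  # reference point from the text

    for f in (50.0, 60.0, 70.0, 80.0):
        scale = wavelength_mm(f) / wavelength_mm(ref_freq_ghz)
        print(f"{f:5.1f} GHz: lambda = {wavelength_mm(f):.2f} mm, "
              f"approx. footprint = {ref_footprint_mm * scale:.2f} mm")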
Other implementations of the radar system102, for instance, can utilize higher frequencies (e.g., frequencies greater than 70 GHz) to further decrease the footprint of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214for other space-constrained devices. Alternatively, if a smart device104has additional available space, the radar system102can be designed to utilize lower frequencies (e.g., frequencies less than 50 GHz), which can increase the footprint of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214. In the example implementations shown inFIGS.5-1to5-5, the radar system102includes one transmit electromagnetic vector sensor212and one receive electromagnetic vector sensor214. Other implementations of the radar system102can include multiple transmit electromagnetic vector sensors212and/or multiple receive electromagnetic vector sensors214, as further described with respect toFIG.6. FIG.6illustrates example implementations of multiple transmit electromagnetic vector sensors212-1and212-2and multiple receive electromagnetic vector sensors214-1and214-2. In the depicted configuration, the antennas402of the transmit electromagnetic vector sensor212-1are disposed on a first portion of the substrate414(e.g., a bottom-left portion of the substrate414). The antennas402of the transmit electromagnetic vector sensor212-2are disposed on a second portion of the substrate414(e.g., a bottom-right portion of the substrate414). The antennas402of the receive electromagnetic vector sensor214-1are disposed on a third portion of the substrate414(e.g., a top-left portion of the substrate414). The antennas402of the receive electromagnetic vector sensor214-2are disposed on a fourth portion of the substrate414(e.g., a top-right portion of the substrate414). In this example, the transmit electromagnetic vector sensor212-1and the receive electromagnetic vector sensor214-1have the orientation502-1. Also, the transmit electromagnetic vector sensor212-2and the receive electromagnetic vector sensor214-2have the orientation502-2. FIG.7-1illustrates an example implementation of the transceiver216. In the depicted configuration, the transceiver216includes at least one transmitter702and at least one receiver704. The transmitter702is coupled to the transmit electromagnetic vector sensor212, and the receiver704is coupled to the receive electromagnetic vector sensor214. The transmitter702is also coupled to the receiver704. Although not explicitly shown, the transmitter702and/or the receiver704can be coupled to the system processor218. The transmit electromagnetic vector sensor212includes at least two antennas706-1to706-N, where N represents a positive integer. The antennas706-1to706-N can be implemented using the antennas402-1to402-3ofFIG.4-1. The receive electromagnetic vector sensor214includes at least three antennas708-1to708-M, where M represents a positive integer. The antennas708-1to708-M can be implemented using the antennas402-1to402-3ofFIG.4-1. In this example implementation, the transmitter702includes at least two transmit channels710-1to710-N. Each transmit channel710-1to710-N can include components such as a voltage-controlled oscillator, a power amplifier, a phase shifter, a mixer, or some combination thereof. The transmit channels710-1to710-N are respectively coupled to the antennas706-1to706-N of the transmit electromagnetic vector sensor212.
For example, the transmit channel710-1is coupled to the antenna706-1, and the transmit channel710-N is coupled to the antenna706-N. The receiver704includes at least three receive channels712-1to712-M. Each receive channel712-1to712-M can include components such as a low-noise amplifier, a phase shifter, a mixer, a filter, and an analog-to-digital converter. The receive channels712-1to712-M are respectively coupled to the antennas708-1to708-M. During transmission, the transmit channels710-1to710-N generate respective radar transmit signals306-1to306-N. The radar transmit signals306-1to306-N have waveforms that can be similar or different. For example, the radar transmit signals306-1to306-N can have similar or different frequencies, phases, amplitudes, or modulations. The antennas706-1to706-N accept the radar transmit signals306-1to306-N from the transmit channels710-1to710-N and transmit the radar transmit signals306-1to306-N. In various implementations, at least a portion of the radar transmit signals306-1to306-N can be transmitted during a same time interval. Alternatively, the radar transmit signals306-1to306-N can be transmitted during different time intervals. During reception, each antenna708-1to708-M receives a radar receive signal308-1to308-M. Each of the radar receive signals308-1to308-M can include a version of at least one of the radar transmit signals306-1to306-N, which is reflected by an object (e.g., the user302ofFIG.3). The receive channels712-1to712-M accept the radar receive signals308-1to308-M from the antennas708-1to708-M. The receive channels712-1to712-M can perform operations such as amplification, phase shifting, filtering, downconversion, demodulation, and analog-to-digital conversion. In general, the receive channels712-1to712-M generate processed versions of the radar receive signals308-1to308-M, which are provided to the electromagnetic-vector-sensor processing module222. InFIG.7-1, each antenna706-1to706-N of the transmit electromagnetic vector sensor212is coupled to a corresponding transmit channel710-1to710-N. Likewise, each antenna708-1to708-M of the receive electromagnetic vector sensor214is coupled to a corresponding receive channel712-1to712-M. By having dedicated channels, the transmit electromagnetic vector sensor212can transmit multiple radar transmit signals306-1to306-N during a first time interval and the receive electromagnetic vector sensor214can receive multiple radar receive signals308-1to308-M during a second time interval. Other implementations of the radar system102can conserve space by implementing a transceiver216with fewer channels, an example of which is further described below with respect toFIG.7-2. FIG.7-2illustrates another example implementation of the transceiver216. In the depicted configuration, the transmitter702of the transceiver216includes fewer transmit channels710than available antennas706-1to706-N within the transmit electromagnetic vector sensor212. In this case, the transmitter702includes one transmit channel710-1. Additionally or alternatively, the receiver704of the transceiver216includes fewer receive channels712than available antennas708-1to708-M within the receive electromagnetic vector sensor214. In this case, the receiver704includes one receive channel712-1. The transceiver216also includes a switching circuit714, which enables time sharing of the transmit channel710-1by the antennas706-1to706-N and enables time sharing of the receive channel712-1by the antennas708-1to708-M.
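The contrast between the dedicated channels ofFIG.7-1and the time-shared arrangement introduced here (and detailed in the next paragraph) can be sketched as a simple schedule. The code below is an illustrative model only; the antenna and channel names mirror the reference numerals but are not an API of the described system:

    # With dedicated channels (FIG. 7-1), every TX antenna can radiate in one
    # interval and every RX antenna can be sampled in another. With a single
    # shared channel per side (FIG. 7-2), the switching circuit 714 steps
    # through the antennas one pair at a time.

    tx_antennas = ["706-1", "706-2"]            # N = 2 (at least two)
    rx_antennas = ["708-1", "708-2", "708-3"]   # M = 3 (at least three)

    schedule = [(tx, rx) for tx in tx_antennas for rx in rx_antennas]

    for slot, (tx, rx) in enumerate(schedule):
        print(f"slot {slot}: transmit channel 710-1 -> {tx}, "
              f"receive channel 712-1 <- {rx}")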
The switching circuit714selectively connects the transmit channel710-1to different ones of the antennas706-1to706-N. The switching circuit714also selectively connects the receive channel712-1to different ones of the antennas708-1to708-M. In some implementations, the switching circuit714connects the receive channel712-1to different ones of the antennas708-1to708-M while connecting the transmit channel710-1to one of the antennas706-1to706-N. Although the present teachings are not so limited, the implementations ofFIGS.5-1to6provide several advantageous characteristics. One such advantage is the ability to perform radar sensing using frequencies associated with millimeter wavelengths while having a footprint that integrates well into smartphones or portable consumer devices. With radar-sensing capabilities, these devices can support a wide variety of applications, including gesture recognition, presence detection, vital-sign monitoring, and/or collision avoidance. The frequencies associated with millimeter waves can include frequencies between approximately 50 and 70 GHz. The compact design of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214can also enable the transceiver216and the antennas of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214to be implemented on a same integrated circuit. In some aspects, this can reduce power consumption in the smart device104and avoid complicated routing compared to other implementations that use multiple integrated circuits. The multiple polarizations and antenna patterns of the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214enable the radar system102to observe a sufficiently large field-of-view for a variety of radar-based applications without introducing significant cross-coupling interference. The techniques of applying electromagnetic vector sensors for radar sensing also enable the radar system102to avoid time-consuming aspects of beam steering or additional complexities associated with beamforming. Example Method FIG.8depicts an example method800for performing operations of electromagnetic vector sensors of a smart-device-based radar system. Method800is shown as sets of operations (or acts) that may be performed, but the operations are not necessarily limited to the order or combinations shown herein. Further, any of one or more of the operations may be repeated, combined, reorganized, or linked to provide a wide array of additional and/or alternate methods. In portions of the following discussion, reference may be made to the environments100-1to100-6ofFIG.1and to entities detailed inFIG.2, reference to which is made for example only. The techniques are not limited to performance by one entity or multiple entities operating on one device. At802, a first radar transmit signal having a first linear polarization along a first axis is transmitted using a first antenna of a transmit electromagnetic vector sensor. For example, the antenna402-1of the transmit electromagnetic vector sensor212transmits a first radar transmit signal306-1having a linear polarization as the polarization404-1. The polarization404-1can represent a vertical linear polarization, which is oriented along the vertical axis416. The antenna402-1can be a linear strip antenna408-1or a dipole antenna410-1.
At804, a second radar transmit signal having a second linear polarization along a second axis that is orthogonal to the first axis is transmitted using a second antenna of the transmit electromagnetic vector sensor. For example, the antenna402-2of the transmit electromagnetic vector sensor212transmits a second radar transmit signal306-2having a linear polarization as the polarization404-2. The polarization404-2can represent a horizontal linear polarization, which is oriented along the horizontal axis418. The antenna402-2can be a linear strip antenna408-2or a dipole antenna410-2. At806, a first radar receive signal having the first linear polarization is received using a first antenna of a receive electromagnetic vector sensor. For example, the antenna402-1of the receive electromagnetic vector sensor214receives a first radar receive signal308-1having the polarization404-1. The antenna402-1can also be a linear strip antenna408-1or a dipole antenna410-1. At808, a second radar receive signal having the second linear polarization is received using a second antenna of the receive electromagnetic vector sensor. For example, the antenna402-2of the receive electromagnetic vector sensor214receives a second radar receive signal308-2having the polarization404-2. The antenna402-2can also be a linear strip antenna408-2or a dipole antenna410-2. At810, a third radar receive signal having a third polarization that is different than the first linear polarization and the second linear polarization is received using a third antenna of the receive electromagnetic vector sensor. The first radar receive signal, the second radar receive signal, and the third radar receive signal each comprise reflected versions of at least one of the first radar transmit signal or the second radar transmit signal. For example, the antenna402-3of the receive electromagnetic vector sensor214receives a third radar receive signal308-3having the third polarization404-3. The third polarization404-3can be a third linear polarization along a third axis (e.g., the Z axis420) that is orthogonal to the vertical axis416and the horizontal axis418. Alternatively, the third polarization404-3can be a circular polarization (e.g., a right-hand circular polarization or a left-hand circular polarization). The antenna402-3can be a loop antenna412. The first radar receive signal308-1, the second radar receive signal308-2, and the third radar receive signal308-3each comprise reflected versions of at least one of the first radar transmit signal306-1or the second radar transmit signal306-2. For example, the first radar receive signal308-1can include portions of the first radar transmit signal306-1and/or portions of the second radar transmit signal306-2with the linear polarization404-1. The second radar receive signal308-2can include portions of the first radar transmit signal306-1and/or portions of the second radar transmit signal306-2with the linear polarization404-2. Also, the third radar receive signal308-3can include portions of the first radar transmit signal306-1and/or portions of the second radar transmit signal306-2with the third polarization404-3. Example Computing System FIG.9illustrates various components of an example computing system900that can be implemented as any type of client, server, and/or computing device as described with reference to the previousFIG.2to implement electromagnetic vector sensors for a smart-device-based radar system.
The computing system900includes communication devices902that enable wired and/or wireless communication of device data904(e.g., received data, data that is being received, data scheduled for broadcast, or data packets of the data). The communication devices902or the computing system900can include one or more radar systems102. In this example, the radar system102includes the transmit electromagnetic vector sensor212and the receive electromagnetic vector sensor214ofFIGS.4-1to5-4. The device data904or other device content can include configuration settings of the device, media content stored on the device, and/or information associated with a user302of the device. Media content stored on the computing system900can include any type of audio, video, and/or image data. The computing system900includes one or more data inputs906via which any type of data, media content, and/or inputs can be received, such as human utterances, the radar-based application206, user-selectable inputs (explicit or implicit), messages, music, television media content, recorded video content, and any other type of audio, video, and/or image data received from any content and/or data source. The computing system900also includes communication interfaces908, which can be implemented as any one or more of a serial and/or parallel interface, a wireless interface, any type of network interface, a modem, and as any other type of communication interface. The communication interfaces908provide a connection and/or communication links between the computing system900and a communication network by which other electronic, computing, and communication devices communicate data with the computing system900. The computing system900includes one or more processors910(e.g., any of microprocessors, controllers, and the like), which process various computer-executable instructions to control the operation of the computing system900and to enable techniques for, or in which can be embodied, radar sensing using electromagnetic vector sensors. Alternatively or in addition, the computing system900can be implemented with any one or combination of hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits which are generally identified at912. Although not shown, the computing system900can include a system bus or data transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. The computing system900also includes computer-readable media914, such as one or more memory devices that enable persistent and/or non-transitory data storage (i.e., in contrast to mere signal transmission), examples of which include random access memory (RAM), non-volatile memory (e.g., any one or more of a read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. The disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable and/or rewriteable compact disc (CD), any type of a digital versatile disc (DVD), and the like. The computing system900can also include a mass storage media device (storage media)916.
The computer-readable media914provides data storage mechanisms to store the device data904, as well as various device applications918and any other types of information and/or data related to operational aspects of the computing system900. For example, an operating system920can be maintained as a computer application with the computer-readable media914and executed on the processors910. The device applications918may include a device manager, such as any form of a control application, software application, signal-processing and control module, code that is native to a particular device, a hardware abstraction layer for a particular device, and so on. The device applications918also include any system components, engines, or managers to perform radar sensing using electromagnetic vector sensors. CONCLUSION Although techniques using, and apparatuses including, electromagnetic vector sensors for a smart-device-based radar system have been described in language specific to features and/or methods, it is to be understood that the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations of electromagnetic vector sensors for a smart-device-based radar system.
56,831
11860295
DETAILED DESCRIPTION FIG.1shows a lower part110of an electronics enclosure100. Electronics enclosure100has an upper part130shown inFIG.11, having the form of a lid, which is configured to sealingly engage with lower part110to form the complete enclosure, as shown inFIG.12. Lower part110in the configuration shown inFIG.1has the form of a rectangular box having a generally planar lower wall111, generally planar sidewalls111A,111B,111C, and111D, which project vertically upwards from the plane of lower wall111so that an interior space112of enclosure100is partly enclosed on five sides. The upper end of each of sidewalls111A,111B,111C, and111D is recessed on an outer edge to provide a sealing rim over which the upper part130can sealingly engage. The upper part130has a corresponding projection formed on a lower surface thereof to be accommodated in recess113, and may provide space for a seal, such as an O-ring seal or a bead of sealant, to be located between the lower part110and the upper part130. The upper part130may, for example, be secured to lower part110by adhesive, or may be provided with fixtures such as locking clamps or screws to engage the upper part130to the lower part110. The upper part130may be a generally flat structure as shown, or may itself partly enclose interior space112, for example by having a corresponding shape to the shape of the lower part110with an upper wall and sidewalls which continue, in the assembled state, from the sidewalls111A,111B,111C and111D of lower part110. As shown inFIG.1, one of the sidewalls111D is provided with connector assembly114, which provides an internal terminal portion114A having conductors which extend through sidewall111D to receptacle114B. The conductors of terminal portion114A pass through sidewall111D by means of appropriate sealing, for example by being in sealing contact with the material of sidewall111D, or otherwise by passing through a seal block such as a rubber block which itself seals with sidewall111D. Thereby, external electrical signals may be introduced to and obtained from electronics mounted at the interior of enclosure100while maintaining a sealed state of enclosure100. Also provided in enclosure100are fixing points115which provide anchors for attaching, for example, an electronic component such as a printed circuit board (PCB). In the configuration ofFIG.1, fixing points115extend from an interior surface of lower wall111, but may also be provided at other locations such as on each of sidewalls111A,111B,111C,111D. For example, fixing points115may be reinforced portions having a blind hole formed therein, into which a screw may be tapped. Arranging suitable holes in the electronic component over fixing points115and introducing a screw through the hole to the fixing point may be used to secure the electronic component. However, other means of securing electronic components within enclosure100may be provided by, for example, the use of adhesives, press-fit studs, retention clips or other fixtures. When upper part130is sealingly engaged with lower part110of enclosure100, there is a need for pressure relief for air (or other gas) which may be present in space112inside enclosure100to prevent a pressure difference between an interior of the enclosure and an exterior of the enclosure from damaging the enclosure or the components housed therein.
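The magnitude of the pressure difference motivating this relief path can be estimated with the ideal gas law: for a sealed, rigid enclosure at constant volume, p2 = p1 * (T2 / T1). The figures below are illustrative assumptions, not values from the disclosure:

    # Rough estimate of the internal overpressure in a sealed enclosure
    # after a temperature rise (e.g., self-heating of the electronics).

    def pressure_after_heating(p1_kpa, t1_c, t2_c):
        t1_k, t2_k = t1_c + 273.15, t2_c + 273.15
        return p1_kpa * (t2_k / t1_k)  # ideal gas law at constant volume

    p1 = 101.325  # sealed at atmospheric pressure, kPa (assumed)
    p2 = pressure_after_heating(p1, 20.0, 85.0)  # assumed temperature swing
    print(f"internal pressure: {p2:.1f} kPa, differential: {p2 - p1:.1f} kPa")

Even a modest temperature swing thus produces a differential on the order of 20 kPa, which is the kind of load the relief arrangement described next is provided to avoid.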
Accordingly, in the configuration ofFIG.1, an aperture117is provided communicating between the interior space112and a relief chamber118, which itself communicates with an exterior of the enclosure100. In the configuration ofFIG.1, aperture117is provided in a partition formed to extend from lower part110in the form of pedestal116, which encloses the relief chamber118. Relief chamber118communicates with the exterior of enclosure100through openings119and120. Relief chamber118and openings119and120are not visible inFIG.1but are shown in the vertical cross-section ofFIG.2and the horizontal cross-section ofFIG.3. As can be seen inFIG.2, pedestal116has a wall portion116A that extends vertically from the plane of lower wall111and a roof portion116B that extends horizontally from wall portion116A so as to enclose relief chamber118. In the configuration shown inFIG.1, roof portion116B is circular in periphery and wall portion116A extends circumferentially around the periphery of roof portion116B. However, other configurations are possible, in which roof portion116B may be, for example, domed, and/or wall portion116A may be formed in a generally curved or in a polygonal shape, for example by a series of wall portions extending in a straight-line or curved configuration with angles defined therebetween. So, pedestal116may, for example, have a rectangular, hexagonal, or octagonal outline in a plane. Extending between pedestal116and adjacent sidewalls111B and111D are tunnel structures121and122. As shown inFIG.3, tunnel structures121and122define tunnels123and124, which extend from relief chamber118, and which respectively terminate at openings119and120.FIG.4shows an alternative view of lower part110from an underside direction, in which lower surface111is visible, as well as openings119and120formed in respective sidewalls111B and111D. In the configuration exemplified inFIG.1, an overall passage for gas exists between space112and the exterior of the enclosure100through, sequentially, aperture117, relief chamber118, tunnels123and124and openings119and120, whereby an overpressure or under-pressure inside enclosure100may be relieved. To prevent ingress of moisture and/or dust into space112defined inside enclosure100, membrane116D is provided to cover aperture117. Membrane116D is gas-permeable but liquid-impermeable and may be formed of a breathable material such as GORE™ membrane. Such breathable materials are known in the art and may be selected according to requirements. In the configuration ofFIG.1, the underside of membrane116D is sealingly adhered by means of, for example, a peripheral adhesive bead to an upper surface of roof portion116B so as to cover aperture117. Membrane116D thereby prevents ingress of moisture and/or dust between relief chamber118and space112through aperture117. In the embodiment ofFIG.1, a recess116C having the same size and shape as membrane116D is formed on roof portion116B to accommodate membrane116D, but in other configurations membrane116D may be affixed to a sufficiently flat surface of roof portion116B without any recess. In the configuration ofFIG.1, recess116C matches the peripheral shape of membrane116D. However, there is no limitation on the shape of membrane116D or recess116C, and either or both of these may be circular, rectangular, octagonal, hexagonal, or of irregular outline.
By providing the configuration ofFIG.1, in which a relief chamber118is arranged between interior space112and an exterior of enclosure100, communicating by means of aperture117and at least one of openings119and120, the opportunity for an incident jet of liquid to impinge directly on membrane116D is reduced. Therefore, the forces experienced by membrane116D resulting from, for example, a cleaning process using liquid jets may be reduced. As a result, the seal provided by the membrane may be more durable and the enclosure is made more resistant against incoming moisture. InFIG.1, a through-axis of aperture117, which may be regarded as being a direction normal to a cross-section of aperture117, is offset, both laterally and in angular direction, from each through-axis of openings119and120. By offsetting the through-axes of openings119and/or120and aperture117, a jet of liquid which is incident on enclosure100and aligned with opening119or120will not travel directly through aperture117. Therefore, the force of the incident liquid on membrane116D may be reduced, and the ability of membrane116D to resist the incident liquid may be improved. Additionally, by providing tunnels123and124extending between relief chamber118and the exterior of enclosure100, the possibility that an incident jet of liquid on enclosure100can directly reach the aperture117is reduced. Similarly, by arranging aperture117in roof portion116B of pedestal116, the possibility for an incident jet of liquid to reach aperture117may further be reduced. Thus, in the configuration exemplified inFIG.1, there is no jet of liquid incident from outside enclosure100which can directly strike membrane116D. In the configuration ofFIG.1, two openings119,120, with associated tunnels123,124are provided which communicate with relief chamber118. Providing two such openings from the exterior of enclosure100allows for liquid, which has entered through one opening, easily to drain through the other opening. Accordingly, a jet of liquid through one opening will not lead to a build-up of liquid in relief chamber118and hence an undesirable increase of inward pressure against membrane116D. By such a configuration, the ability of the enclosure to resist incident liquid can further be improved. Moreover, in the configuration ofFIG.1, openings119,120are arranged with through-axes which are angularly offset one to another. Specifically, in the configuration ofFIG.1, openings119,120are located on different sidewalls, particularly adjacent sidewalls111D,111B. Here, adjacent sidewalls111D,111B extend at an inclination one to the other, shown as a perpendicular inclination. Such a configuration promotes more effective drainage of liquid from relief chamber118. As shown inFIG.1, providing the through-axes of openings119,120to be perpendicular particularly efficiently promotes drainage, especially if the enclosure100may, in use, be installed at different orientations. Also, in the configuration ofFIG.1, tunnels123,124which terminate at openings119,120, extend at an angle one to another, and in particular are arranged to extend linearly in directions perpendicular one to another. Such a configuration permits the enclosure100to be installed in a variety of orientations and to retain the ability effectively to drain liquid which has entered into relief chamber118out from relief chamber118by the force of gravity. However, in other configurations, tunnels123and124need not be at right angles to one another and may be arranged at other angles.
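The geometric effect of the offset through-axes can be checked with elementary vector arithmetic. The sketch below (all coordinates assumed, for illustration only) computes the lateral miss distance between a jet travelling along the through-axis of an opening and the center of the relief aperture; any positive offset means the jet cannot strike the membrane directly:

    import math

    def point_to_line_distance(p, line_point, line_dir):
        # Perpendicular distance from point p to the line through line_point
        # with unit direction line_dir, via the cross-product magnitude.
        v = [p[i] - line_point[i] for i in range(3)]
        cx = (v[1] * line_dir[2] - v[2] * line_dir[1],
              v[2] * line_dir[0] - v[0] * line_dir[2],
              v[0] * line_dir[1] - v[1] * line_dir[0])
        return math.sqrt(sum(c * c for c in cx))

    # Assumed geometry in millimeters: a jet enters opening 119 along +x;
    # aperture 117 sits in the pedestal roof, laterally and vertically offset.
    opening_axis_point = (0.0, 0.0, 5.0)
    opening_axis_dir = (1.0, 0.0, 0.0)  # unit vector
    aperture_center = (10.0, 4.0, 12.0)

    offset = point_to_line_distance(aperture_center, opening_axis_point, opening_axis_dir)
    print(f"miss distance of jet axis from aperture center: {offset:.1f} mm")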
Moreover, although tunnels123and124are depicted as being formed in a straight line with a constant cross-section, in other configurations the cross-section can be narrowed or expanded along each tunnel. Further, the path of tunnels123and124may each be made curving or labyrinthine in order further to resist the incursion of liquid. The above disclosure has been exemplified in one configuration shown inFIGS.1to4, but many variations are possible without departing from the advantageous functionality and associated structure disclosed above. Reference will now be made to exemplary further variant embodiments shown respectively inFIGS.5to6andFIGS.7to10. Where elements have not been described or labelled in connection with these embodiments, it is to be understood that like elements as disclosed in connection with the embodiment ofFIG.1are present with corresponding structure and function. For example, as shown inFIGS.5and6, in a variant configuration, enclosure310may have a lower part320having openings219and220. Openings219and220, which terminate tunnels223and224, may have different cross-sections. For example, as shown inFIGS.5and6, the cross-section of tunnel223may be rectangular and may have constant dimensions, while the cross-section of tunnel224and associated opening220may be round and may taper, for example toward relief chamber228. Other configurations of cross-section are possible, without limitation. In another exemplary configuration, as shown inFIGS.7to10, lower part320is provided with one opening319arranged at one sidewall311D of enclosure310and connected to relief chamber328by tunnel323. Lower part320is also provided with at least one second opening220, here shown as a group of second openings220, formed in lower surface311of lower part320and extending directly between lower surface311and relief chamber328. In such a configuration, aperture317formed in roof portion316B of pedestal316may still be adequately protected from direct incidence of liquid jets since, for some applications, the enclosure may be mounted with openings220facing another surface, such as a surface to which lower part320is fixed and by which it is supported. Moreover, aperture317may be protected from direct incidence of liquid jets by arranging the one or more openings220with through-axes offset from a through-axis of aperture317. In the configuration ofFIGS.7to10, openings220have through-axes which are laterally offset from a through-axis of aperture317, for example. In this way, systems, assemblies, and apparatus described herein meet IP6K6K according to IEC Standard 60529. For example, an example assembly is configured to meet IP6K6K according to IEC Standard 60529 and the assembly is configured to be mounted in a vehicle. Additional or alternative protection against direct incidence of liquid jets may also be provided by implementing each opening as a group of smaller holes, or by covering each opening with a grid with sufficiently small mesh spacing, in each case so as to disrupt or reduce the force of any incident liquid jet. Moreover, in the above, disclosure has been made of a particular configuration which, having certain structural features, may exhibit certain functionality. However, as the skilled person will recognise, elements of the above-described structure may be modified or adapted or substituted by known equivalents without affecting the essential functionality.
For example, in the above embodiments, example has been used of an essentially rectangular box-shaped enclosure, but the present configuration is also applicable to box-type enclosures having an outline of square, hexagonal, octagonal, or another polygonal shape. Similarly, the present disclosure is applicable to enclosures which may have a circular outline, and which may have domed or otherwise non-planar upper and/or lower surfaces. For example, the present disclosure can be equivalently applied in a drum-like enclosure having a planar lower surface and a dome-shaped upper surface, and wherein openings are provided either at locations in the circumferential wall of the drum or at one location on the circumferential wall of the drum and on a planar lower surface of the drum. Moreover, the present disclosure is also applicable to configurations in which only one exterior opening is provided, whether at a side surface, a lower surface, or another surface of the enclosure. The above disclosure has been exemplified with respect to a two-part enclosure having a lower part and an upper part which sealingly engages with the lower part as shown inFIG.12. However, other configurations are possible, and the enclosure can be configured, for example, to have two lateral half-portions which each comprise an upper wall, lower wall and one or more sidewalls, and which are joined together at an open end of each lateral half-portion. In a further configuration, no terminal portion and corresponding receptacle may be provided for communicating electrical signals between interior and exterior of the enclosure, and communication may be provided, for example, by wireless means. Further, in connection with the above disclosure, each of the lower part of the enclosure and the upper part of the enclosure has been shown as an integral, unitary structure which may be formed, for example, by injection moulding. However, the respective parts of each of these structures may also be formed separately and then sealably joined together by any suitable means such as adhesive or welding. The material of the enclosure is not limited, and the enclosure may advantageously be manufactured from a plastic or ceramic material. Also, in the above disclosure, it has been explained how a gas-permeable membrane may be sealingly adhered to an upper surface of a pedestal structure which defines a relief chamber; however, in other configurations a membrane may also be formed as an insert to a relief aperture or may be adhered to an inner surface of the relief chamber. Moreover, although the relief aperture has been exemplified by a circular through-hole, the relief aperture may be formed with a variety of cross-sections and shapes. Finally, although the above disclosure has been set out in relation to a generic enclosure, an embodiment of the disclosure may be implemented as an electronics module having the enclosure and a radar sensor accommodated in a space inside the enclosure, for example on a PCB implementing a radar sensor. The sensor may be fixed inside the enclosure and may communicate with elements outside of the enclosure by electronic signals transmitted via the terminal portion. In such a configuration, the enclosure may be made of radar-transparent material. Accordingly, the foregoing disclosure is to be understood as purely exemplary and illustrative of the principles and essential features of the disclosure.
Substitution or variation of materials and mechanisms among those known to one skilled in the art is contemplated without affecting the essential principles of the configurations herein disclosed and their associated effects and advantages. Accordingly, the claimed scope is to be understood as limited solely by the appended claims, taking due account of any equivalents.
17,544
11860296
DETAILED DESCRIPTION FIG.1shows a radar arrangement1as known from the prior art. The radar arrangement1has a printed circuit board2, an electronic component3, and an antenna4. The electronic component3is arranged on the printed circuit board2and is used to generate a high-frequency signal. The radar arrangement1further has a line structure5, which is part of the printed circuit board2, for guiding the high-frequency signal from the electronic component3into the region of the antenna4, wherein the line structure5radiates the high-frequency signal at an open-ended radiation region6and impinges the antenna4with the radiated high-frequency signal. In the schematic representation according toFIG.1, the antenna4also comprises a horn for shaping and guiding the radiated electromagnetic waves7, but this is not necessary for implementing an antenna4. The embodiment according toFIG.1is also a schematic representation in that the dimensions of the components shown are not shown to scale. The representation is chosen in such a way that the components—this concerns, for example, the line structure5—are recognizable as such. In any case, it is important to note that, in the illustrated embodiment, the component side of the printed circuit board2, on which the electronic component3is thus located, is identical to the side on which the antenna4is implemented. It is obvious that very simple designs for the illustrated radar arrangement1can be implemented in this way, but disadvantages arise here with regard to a desired process separation; also, compact designs with several antennas can only be implemented to a limited extent. InFIGS.2to9, various aspects of a radar arrangement1are shown with which various disadvantages of the implementation according toFIG.1can be avoided. Based on the embodiment according toFIG.2, it is clear that the printed circuit board2comprises four electrically conductive layers8,9,10,11extending substantially parallel to each other and separated from each other by at least three electrically insulating layers12,13,14. When it is said that the various layers run “essentially” parallel to one another, this means that parallelism is not meant here in the mathematically exact sense, but to the extent that parallelism can be implemented in a technically meaningful sense—with the usual technically unavoidable inaccuracies. Two outer layers8,11of the printed circuit board2, the first electrically conductive outer layer8and the second electrically conductive outer layer11, are formed by two electrically conductive layers of the at least four electrically conductive layers8,9,10,11, and the remaining at least two electrically conductive layers9,10form electrically conductive inner layers9,10of the printed circuit board2. The first electrically conductive inner layer9is adjacent to the first electrically conductive outer layer8and the second electrically conductive inner layer10is adjacent to the second electrically conductive outer layer11. The three electrically insulating layers12,13,14are all electrically insulating inner layers12,13,14of the printed circuit board2. The electronic component3is arranged on the first outer layer8of the printed circuit board2, i.e. the component side, and the antenna4is formed in the second outer layer11of the printed circuit board2, i.e. the antenna side.
Thus, the high-frequency signal generated by the electronic component3is transmitted to the antenna formed in the second outer layer11of the printed circuit board2through the region of the electrically conductive and electrically insulating inner layers9,10,12,13,14of the printed circuit board2. The illustration inFIG.2is very schematic. As will become apparent later, on the basis of the indicated layer thicknesses, the electronic component3would have to be shown considerably larger, namely approximately in the order of magnitude of the total layer thickness of the printed circuit board2shown here. The illustration inFIG.2is also schematic in that it does not explicitly show how the antenna4is formed in the second electrically conductive outer layer11and how the electrically conductive inner layers9,10of the printed circuit board2are structured so that the high-frequency signal generated by the electronic component3can pass from the first electrically conductive outer layer8to the second electrically conductive outer layer11or to the antenna4formed therein. Of course, this would not be possible if the electrically conductive inner layers9,10had no interruptions. This also applies to the illustrations inFIGS.3to4; here, too, the cuts have not been made in the areas where the interruptions required in the electrically conductive inner layers9,10are implemented. It is important inFIGS.2to4that the structure of the printed circuit board2is clear. In the illustrated embodiments of the radar arrangement1inFIGS.2to9, the electrically insulating layers12,13,14have a thickness of about 100 μm and are made of a high-frequency substrate having a low attenuation for electromagnetic waves at frequencies of the high-frequency signal generated by the electronic component3. The electrically conductive layers8,9,10,11in the embodiment according toFIG.2have a uniform thickness of about 18 μm, thus also the electrically conductive outer layers8,11. In contrast, the electrically conductive outer layers8,11in the embodiment according toFIG.3have a layer thickness of about 43 μm; they are also formed from copper. The greater layer thickness of the electrically conductive outer layers8,11has been achieved by electroplating copper onto an initially existing conductive copper layer with the thickness of the electrically conductive inner layers9,10until the said layer thickness of the electrically conductive outer layers8,11has been achieved. The ratios of the layer thicknesses of the various electrically conductive layers8,9,10,11and the electrically insulating layers12,13,14are shown approximately correctly inFIGS.2to5. In the embodiments of the radar arrangement1according toFIGS.4and5, a stiffening layer15is fixed to the second electrically conductive outer layer11, wherein the stiffening layer15is bonded to the second electrically conductive outer layer11by means of a bonding layer24. The bonding layer24here has a thickness of about 30 μm. The stiffening layer15consists of a metallized non-metal, the non-metal being a composite material, in this case a composite material of glass fiber fabric and epoxy resin, namely FR-4. The stiffening layer15is provided on both sides with a metallization18. Here, the stiffening layer15has a thickness of 0.7 mm. Since the bonding layer24is electrically conductive in the present case, the second electrically conductive outer layer11, the bonding layer24and the metallization18form an electrically conductive unit.
Even if the bonding layer24is not electrically conductive, the capacitor then formed from the second electrically conductive outer layer11, the electrically insulating bonding layer24and the metallization18forms a short circuit, electrically speaking, during operation due to the high frequencies of the electromagnetic radiation that is common in the radar range. InFIG.5, a recess16is formed in the stiffening layer15in the region of the antenna4formed in the second electrically conductive outer layer11, thereby forming a boundary edge17. The boundary edge17of the recess16is provided with a metallization18and is therefore metallized. The metallized boundary edge17thus influences the directivity of the radar arrangement1. InFIG.5, how the antenna4is implemented in the second electrically conductive outer layer11is also shown for the first time, namely by providing the second electrically conductive outer layer11with corresponding interruptions, whereby the antenna4is exposed in the second electrically conductive outer layer11. InFIGS.6and7, two different embodiments with specific geometries are shown, indicating how radar arrangements1can be advantageously implemented. Only the electrically conductive layers8,9,10,11are shown in each case, wherein the first electrically conductive outer layer8, the first electrically conductive inner layer9, the second electrically conductive inner layer10and the second electrically conductive outer layer11are shown from top to bottom. The configurations according toFIGS.6and7have in common that the radiation region6of the line structure5for guiding the high-frequency signal, a fine aperture19for the defined passage of the electromagnetic radiation7radiated by the radiation region6of the line structure5, and the antenna4in the second electrically conductive outer layer11are implemented in three of the electrically conductive layers8,9,10,11as seen in the direction of the surface normals of the electrically conductive layers8,9,10,11from the first electrically conductive outer layer8to the second electrically conductive outer layer11. Thus, the embodiments inFIGS.6and7each implement an aperture-coupled patch antenna. In the embodiment according toFIG.6, the radiation region6of the line structure5for guiding the high-frequency signal is implemented in the first electrically conductive outer layer8(FIG.6a), the fine aperture19is implemented in the first electrically conductive inner layer9(FIG.6b), and a coarse aperture20is implemented in the second electrically conductive inner layer10(FIG.6c). The coarse aperture20is larger than the fine aperture19and is used for the unobstructed passage of the electromagnetic radiation7passing through the fine aperture19to the antenna4formed in the second electrically conductive outer layer11(FIG.6d). In the embodiment according toFIG.7, on the other hand, the radiation region6of the line structure5for guiding the high-frequency signal is implemented in the first electrically conductive inner layer9and the fine aperture19is implemented in the second electrically conductive inner layer10. The first electrically conductive outer layer8is formed as a metallic shield in the radiation region6of the line structure5for guiding the high-frequency signal.
Since the line structure5is designed here as a strip line, the configuration according toFIG.7implements a symmetrical strip line due to the shielding on both sides by the first electrically conductive outer layer8and the second electrically conductive inner layer10with the exception of the fine aperture19. FIG.8shows a special feature of the radar arrangements according toFIGS.6,7and9, namely an electrical through-connection21between the four electrically conductive layers8,9,10,11, which of course also extends through the three electrically insulating layers12,13,14. The through-connection electrically connects all electrically conductive layers8,9,10,11. The electrical through-connection21here is a hole with a metallized inner wall. Such through-connections21are implemented in a plurality in the embodiments according toFIGS.6,7and9. By means of this plurality of electrical through-connections21between the four electrically conductive layers8,9,10,11, a grid-like electromagnetic shielding22is implemented, namely around the radiation region6of the line structure5for guiding the high-frequency signal, around the fine aperture19for defined passage of the electromagnetic radiation7radiated by the radiation region6of the line structure5and around the antenna4in the second electrically conductive outer layer11and, as far as applicable (embodiment according toFIG.6), around the coarse aperture20implemented in the second electrically conductive inner layer10, so that the resulting structure of grid-like electromagnetic shielding22, radiation region6of the line structure5, fine aperture19and antenna4in the second electrically conductive outer layer11forms a unit cell23. Accordingly, inFIGS.6and7, the structure of a grid-like electromagnetic shielding22and also of a unit cell23is also shown in each case in layers. The grid-like electromagnetic shielding22and thus the unit cell23have a hexagonal cross-section as viewed in the direction of the surface normals of the electrically conductive layers8,9,10,11. Finally,FIG.9shows the implementation of several unit cells23within a radar arrangement1and thus the implementation of a patch antenna array with which—provided that the various unit cells are appropriately controlled—the directivity of the radar arrangement1can also be varied. The point of view here is from above, onto the first electrically conductive outer layer8, in which the line structure5with its radiation region6is implemented. The fine aperture19in the underlying first electrically conductive inner layer9is still indicated. The dimensions of the grid-like electromagnetic shielding22and thus of the unit cell23are selected in such a way that several unit cells23can be arranged on a hexagonal grid with a spacing of a whole wavelength of the radiated high-frequency signal.
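The hexagonal grid spacing of one wavelength described above can be made concrete with a short sketch. The design frequency below is an assumption chosen only for illustration (the disclosure does not state one); the construction of the six nearest-neighbour cell positions is generic:

    import math

    C = 299_792_458.0      # speed of light, m/s
    freq_hz = 77e9         # assumed radar frequency for illustration
    spacing = C / freq_hz  # one free-space wavelength, in meters

    # Positions of the six nearest unit cells 23 around a center cell on a
    # hexagonal grid with one-wavelength spacing.
    neighbours = [(spacing * math.cos(math.radians(60 * k)),
                   spacing * math.sin(math.radians(60 * k))) for k in range(6)]

    print(f"cell spacing: {spacing * 1e3:.2f} mm")
    for x, y in neighbours:
        print(f"neighbour at ({x * 1e3:+.2f} mm, {y * 1e3:+.2f} mm)")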
12,931
11860297
DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. Throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. In accordance with one or more exemplary embodiments, methods and systems for high angular resolution and unambiguous angle estimation in a radar device are described herein. Exemplary embodiments may include minimal numbers of different RF chip antenna integrated radar devices. For example, at least two of a single type of RF chip antenna integrated packaging may be arranged to establish a virtual antenna array capable of high angular resolution and unambiguous angle estimation. FIG.1shows an embodiment of a vehicle10, which includes a vehicle body12defining, at least in part, an occupant compartment14. The vehicle10, while shown inFIG.1as an automobile, may be any truck, aircraft, construction equipment, farm equipment, factory equipment, etc. whether user or autonomously operated. Thus, the vehicle and the vehicle body12are not limiting. The vehicle body12may support various vehicle subsystems including a powertrain16including an electric drive unit or internal combustion engine, and other subsystems to support functions of the powertrain16and other vehicle components, such as a braking subsystem, a steering subsystem, a fuel injection subsystem, an exhaust subsystem and others. The vehicle10may include a detection system20for detecting objects/obstacles, tracking objects, and avoiding obstacles, which may be used to alert a user, perform avoidance maneuvers, assist with user control, and/or assist with autonomously controlling the vehicle10. The detection system20may include one or more radar devices22. The vehicle10may incorporate a plurality of radar devices22disposed at various locations of the vehicle body12and having various angular directions, as shown inFIG.1. An embodiment of the detection system20is configured to estimate angular position of an object. An object may be any feature or condition that reflects transmitted radar signals, such as other vehicles, people, road signs, trees, road features, road obstructions, and others. Each radar device22may include transmit and receive functions which may be carried out by separate transmit and receive antenna arrays in a Multiple-Input Multiple-Output (MIMO) arrangement. Each radar device22may include components and features, such as transmit and receive antenna arrays, corresponding transmit and receive radar front end, and feedlines coupling the antennas to the radar front end. Radar front end is understood to include RF radar functions and other functions carried out primarily in the analog domain including transmit channel signal generation and transmission and receive channel conditioning. RF radar functions may include digitization of analog signals (e.g., analog-to-digital (A/D) and digital-to-analog (D/A) conversions). Each radar device may further include radar backend which is understood to include digital domain radar functions including digital signal processing (DSP) of digitized reflected radar signals. Radar backend functions may include digitization of analog signals (e.g., analog-to-digital (A/D) and digital-to-analog (D/A) conversions).
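The division of labour just described can be summarized in a small sketch. The stage names below are illustrative and assumed; they are not taken from the disclosure, which only partitions the processing into an analog-dominated front end and a digital backend:

    # Illustrative partition of a radar signal chain into front-end (analog
    # domain plus digitization) and backend (digital signal processing) roles.

    FRONT_END = ["waveform generation", "power amplification", "transmission",
                 "low-noise amplification", "downconversion", "A/D conversion"]
    BACKEND = ["range processing", "Doppler processing", "digital beamforming",
               "detection", "angle estimation"]

    for stage in FRONT_END:
        print("front end ->", stage)
    for stage in BACKEND:
        print("backend   ->", stage)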
Further, each radar device22, via a controller (e.g., a microcontroller unit) or other processing device (e.g., a combinational logic circuit), may execute one or more software or firmware programs that provide desired functionality. Radar backend processes and functions may be carried out within the radar device22or external thereto, for example via a controller or other processing device. The radar devices22may communicate with one or more processing devices, such as co-packaged processing devices in each radar device22, an on-board processor24, or a remote processor26. The remote processor26may be part of, for example, a mapping system or vehicle diagnostic system. The vehicle10may also include a user interaction system28and other components such as a global positioning system (GPS) device. FIG.2illustrates an embodiment of a computer system30that is in communication with or is part of the detection system20, and that may perform various aspects of embodiments described herein. The computer system30includes at least one processing device32, which generally includes one or more processors for performing functions of radar detection and analysis described herein. The processing device32may be integrated into the vehicle10, for example, as the on-board processor24, or may be a processing device separate from the vehicle10, such as a server, a personal computer or a mobile device (e.g., a smartphone or tablet). The processing device32may also be co-packaged within the radar device22or incorporated into a system-on-chip radar device22, which may also include the antenna arrays, radar front end, and feedlines. The processing device32may be configured to perform radar detection and analysis methods and radar backend processes such as DSP including digital beam forming described herein, among other functions. Components of the computer system30include the processing device32(such as one or more processors, processing units or digital signal processors) and a system memory34. The system memory34may include a variety of computer system readable media. Such media may be any available media that is accessible by the processing device32, and includes both volatile and non-volatile media, removable and non-removable media. For example, the system memory34may include a non-volatile memory36and may also include a volatile memory38. The computer system30may further include other removable/non-removable, volatile/non-volatile computer system readable storage media. A computer system readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. The system memory34may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out functions of the embodiments described herein. For example, the system memory34stores various program modules40that generally carry out the functions and/or methodologies of embodiments described herein. For example, a receiver module42may be included to perform functions related to acquiring and processing received signals, and an analysis module44may be included to perform functions related to position estimation and range finding. 
The system memory34may also store various data structures46, such as data files or other structures that store data related to radar detection and analysis. Examples of such data include sampled return radar signals, radar impulse response, the array beam pattern, frequency data, range-Doppler plots, range maps, and object position, velocity and/or azimuth data. As used herein, the term “module” refers to processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. The processing device32may also communicate with one or more external devices48such as a keyboard, a pointing device, and/or any devices (e.g., network card, modem, etc.) that enable the processing device32to communicate with one or more other computing devices. In addition, the processing device32may communicate with one or more devices that may be used in conjunction with the detection system20, such as a GPS device50and a camera52. The GPS device50and the camera52may be used, for example, in combination with the detection system20for autonomous control of the vehicle10. Communication with various devices may occur via Input/Output (I/O) interfaces54. The processing device32may also communicate with one or more networks56such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via a network adapter58. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the computer system30. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, and data archival storage systems, etc. In accordance with the present disclosure, a radar device includes substantially identical transceiver sets of MIMO transmit (TX) chips and receive (RX) chips. A radar device includes a plurality of such substantially identical transceiver sets. Within each transceiver set, the number of TX chips may be greater than the number of RX chips, the number of TX chips may be less than the number of RX chips, or the number of TX chips may equal the number of RX chips. All transceiver sets have the same number of TX chips. All transceiver sets have the same number of RX chips. All transceiver sets have the same spatial layout among the TX chips and RX chips. As used herein, chip may refer generally to a semiconductor die including electric circuit elements including antennas, components, conductors, films, etc. formed on electronic-grade silicon (EGS) or other semiconductor (such as GaAs). Chip may also refer to one or more packaged dies (including pinout connections) for printed circuit board (PCB) mounting. A TX chip may at least include a respective TX sub-array of antennas, connecting traces and radar front end structures and functions for transmit channels. Similarly, a RX chip may at least include a respective RX sub-array of antennas, connecting traces and radar front end structure and functions for receive channels. All TX sub-arrays have equivalent numbers of TX antennas and substantially identical spatial layouts of the TX antennas. Similarly, all RX sub-arrays have equivalent numbers of RX antennas and substantially identical spatial layouts of the RX antennas. 
For purposes of this disclosure, arrays and sub-arrays refer to linear arrays and sub-arrays. TX and RX chips may additionally include radar backend structures and functions including DSP and MCU(s). In one embodiment and with reference toFIG.3, a radar device300includes a pair of transceiver sets301A and301B of TX and RX chips303and305. A first transceiver set of TX and RX chips301A and a second transceiver set of TX and RX chips301B are substantially identical. In the present embodiment, each transceiver set includes one TX chip303and one RX chip305. The TX chips303include a TX radar front end and feedlines (not detailed) and a TX sub-array of antennas307. The TX sub-array of antennas307includes a plurality (N) of such antennas307. The RX chips305include a RX radar front end and feedlines (not detailed) and a RX sub-array of antennas309. The RX sub-array of antennas309includes a plurality (M) of such antennas309. In the present example N=3 and M=4. In one embodiment, the TX and RX chips (e.g.,303and305) may be fabricated as separate dies and individually packaged into separate chips (including pinouts) and then co-packaged into an integrated radar device, such as by mounting on a common PCB in accordance with the desired spatial layout among the TX and RX chips (e.g.,303and305) and transceiver sets (e.g.,301A and301B). In another embodiment, the TX and RX chips (e.g.,303and305) may be fabricated as separate dies but packaged into a single transceiver chip (including pinouts) in accordance with the desired spatial layout between the TX and RX chips (e.g.,303and305) and mounted on a PCB in accordance with the desired spatial layout between the transceiver sets (e.g.,301A and301B). In another embodiment, the TX and RX chips (e.g.,303and305) may be fabricated on a common die in accordance with the desired spatial layout between the TX and RX chips (e.g.,303and305) and included in a single transceiver chip (i.e., a transceiver set)(including pinouts) and mounted on a PCB in accordance with the desired spatial layout between the transceiver sets (301A and301B). In another embodiment, the TX and RX chips (e.g.,303and305) of multiple transceiver sets (e.g.,301A and301B) may be fabricated on a common die in accordance with the desired spatial layout among the TX and RX chips (e.g.,303and305) and the transceiver sets (e.g.,301A and301B) and included in a single integrated chip (including pinouts) including all TX chips and RX chips defining all transceiver sets and mounted on a PCB. Other packaging embodiments are possible and may be apparent to one having ordinary skill in the art. Thus, it is envisioned that the TX and RX chips may be discrete components that are co-packaged into an integrated radar device, may be fabricated as part of a complete or partial system-on-chip radar device, or may be incorporated at various other levels of integration as may be required for differing end use applications. As described, the first transceiver set of TX and RX chips301A and the second transceiver set of TX and RX chips301B are substantially identical. Thus, all TX chips303have substantially identical spatial layouts and all RX chips305have substantially identical spatial layouts. In the embodiment illustrated inFIG.3, the TX sub-array antennas307are spaced by a distance D1 (TX antenna spacing). The RX sub-array antennas309are spaced by a distance D2 (RX antenna spacing). The TX chip303and the RX chip305are separated by a TX chip to RX chip distance D3. 
The first transceiver set of TX and RX chips301A and the second transceiver set of TX and RX chips301B are offset or spaced by a distance D4 (transceiver set spacing). The RX antenna spacing distance D2 may be established at a value greater than K*λ, where λ is the radar operating wavelength. Radar operating wavelength may include one or both of a transmit wavelength and a receive wavelength. In one embodiment, K is at least 1. In another embodiment, K is an integer. In another embodiment, K is an integer greater than 1. Antenna spacings at or above the RX antenna spacing distance D2 are for the purposes of this disclosure referred to as widely spaced, whereas antenna spacings below the RX antenna spacing distance D2 are for the purposes of this disclosure referred to as narrowly spaced. The TX antenna spacing distance D1 may be established equal to M*D2, where M is the number of RX sub-array antennas309on each RX chip305. The transceiver set spacing distance D4 may be established at less than N*D1, where N is the number of TX sub-array antennas307on each TX chip303. The TX chip to RX chip distance D3 may be established arbitrarily, though it is substantially identical between the first transceiver set of TX and RX chips301A and the second transceiver set of TX and RX chips301B. By establishing the TX antenna spacing distance D1 and the RX antenna spacing distance D2 as described, each transceiver set of TX and RX chips establishes a respective N*M virtual array of antennas substantially uniformly spaced by the RX antenna spacing distance D2. By establishing the transceiver set spacing distance D4 as described, each respective virtual array is offset from the other and overlaps the other by some amount, whereby antennas from the respective virtual arrays alternate within the overlapped region. Thus, while the virtual arrays overlap, the individual array antennas do not overlap but are in spaced adjacency. Preferably, the separation between the adjacent, alternating antennas is substantially uniform and substantially one-half the RX antenna spacing distance D2. In combination, the respective virtual arrays established by the first transceiver set of TX and RX chips301A and the second transceiver set of TX and RX chips301B together establish a combined virtual array that spans a wider aperture than the individual respective virtual arrays from each of the transceiver sets of TX and RX chips, thus providing higher angular resolution. Significantly, by establishing the transceiver set spacing distance D4 less than N*D1 such that the overlapped antennas alternate with tighter spacing than the RX antenna spacing distance D2, the overlapped region may provide less angular ambiguity. As illustrated inFIG.3, the first transceiver set of TX and RX chips301A establishes a respective first N*M virtual array of antennas311A (cross-hatch filled virtual antennas307V) substantially uniformly spaced by the RX antenna spacing distance D2. In the present exemplary embodiment where N=3 and M=4, the virtual array311A has 12 virtual antennas307V. Similarly, the second transceiver set of TX and RX chips301B establishes a respective second N*M virtual array of antennas311B (solid filled virtual antennas307V) substantially uniformly spaced by the RX antenna spacing distance D2. In the present exemplary embodiment where N=3 and M=4, the virtual array311B has 12 virtual antennas307V. 
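The geometry just described can be made concrete with a short sketch. This is a minimal illustration under assumed values: positions are expressed in units of D2, and D4 = 4.25*D2 is one example choice satisfying D4 < N*D1 while making the overlapped virtual elements interleave at D2/2; the disclosure itself fixes only the constraints, not these numbers.

```python
N, M = 3, 4            # TX antennas per TX chip, RX antennas per RX chip
D2 = 1.0               # RX antenna spacing, used as the unit of length
D1 = M * D2            # TX antenna spacing D1 = M*D2
D4 = 4.25 * D2         # assumed transceiver set spacing; D4 < N*D1, and the
                       # virtual arrays shift by 2*D4 = 8.5*D2, so overlapped
                       # elements interleave at D2/2

def virtual_positions(offset: float) -> list[float]:
    """Each TX/RX pair contributes a virtual element at the sum of the TX and
    RX positions; the common TX-to-RX gap D3 shifts both sets equally and is
    therefore omitted."""
    tx = [offset + n * D1 for n in range(N)]
    rx = [offset + m * D2 for m in range(M)]
    return sorted(t + r for t in tx for r in rx)

array_311a = virtual_positions(0.0)   # 12 elements at 0, 1, ..., 11 (D2 units)
array_311b = virtual_positions(D4)    # 12 elements at 8.5, 9.5, ..., 19.5
overlap_317 = sorted(p for p in array_311a + array_311b if 8.5 <= p <= 11.0)
print(overlap_317)   # [8.5, 9.0, 9.5, 10.0, 10.5, 11.0] -> alternating at D2/2
```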
The combined virtual array315includes both the first virtual array311A and the second virtual array311B and extends to the extreme outer ends of the respective first and second virtual arrays311A and311B. The first virtual array311A and the second virtual array311B overlap in overlap region317where the virtual antennas from the respective virtual arrays alternate. FIG.4illustrates the combined virtual array315and its use in an operational process flow to evaluate reflected radar signals with high angular resolution and low angular ambiguity. The combined virtual array315is a universal set of all virtual antennas from all the transceiver sets of TX and RX chips used in a radar device configured in accordance with the present disclosure. In the present embodiment utilizing two such transceiver sets of TX and RX chips, the combined virtual array includes all virtual antennas (cross-hatch filled virtual antennas307V) established by the first transceiver set of TX and RX chips301A and all virtual antennas (solid filled virtual antennas307V) established by the second transceiver set of TX and RX chips301B. A first virtual sub-array321includes virtual antennas307V extending to the extreme outer regions of the combined virtual array315and at least a portion of the virtual antennas307V in the overlap region317. The adjacent virtual antennas307V outside the overlap region317are widely spaced (i.e., equal to the RX antenna spacing distance D2), whereas the adjacent virtual antennas within the overlap region317are narrowly spaced (i.e., less than the RX antenna spacing distance D2). The virtual antennas307V within the overlap region317that are included with the first virtual sub-array321are also preferably widely spaced (i.e., greater than or equal to the RX antenna spacing distance D2). Therefore, the virtual antennas307V within the overlap region317that are included with the first virtual sub-array321are not adjacent ones of the virtual antennas. Thus, the virtual antennas307V that make up the first virtual sub-array321provide a wide aperture with widely spaced virtual antennas307V characterized by high angular resolution but high angular ambiguity. A second virtual sub-array323includes only the virtual antennas307V in the overlap region317. The adjacent virtual antennas within the overlap region317are narrowly spaced (i.e., less than the RX antenna spacing distance D2). Thus, the virtual antennas307V that make up the second virtual sub-array323provide a narrow aperture of narrowly spaced virtual antennas307V characterized by low angular ambiguity but low angular resolution. As used herein, high and low angular resolutions are relative terms referring to the angular resolution of one virtual sub-array compared to the angular resolution of the other virtual sub-array. Likewise, as used herein, high and low angular ambiguity are relative terms referring to the angular ambiguity of one virtual sub-array compared to the angular ambiguity of the other virtual sub-array. In one embodiment, radar backend processing may selectively partition the combined virtual array315into the first virtual sub-array321and the second virtual sub-array323as described. A first beam forming operation may be performed using the first sub-array321to evaluate the angle of arrival of reflected radar signals at331. Generally, the beam forming operation receives reflected radar signals from each of the virtual antennas of the first sub-array321and coherently combines them for each angle of arrival. 
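Before turning to the example plots, the partition just described can be sketched as follows. This is a minimal illustration that reuses the assumed geometry above (positions in units of D2, overlap region between 8.5 and 11.0); the particular choice of which widely spaced overlap elements join sub-array321is one possibility, not mandated by the disclosure.

```python
array_311a = [float(i) for i in range(12)]   # first virtual array, D2 units
array_311b = [8.5 + i for i in range(12)]    # second virtual array
combined_315 = sorted(array_311a + array_311b)

LO, HI = 8.5, 11.0                           # overlap region 317

inside = [p for p in combined_315 if LO <= p <= HI]
outside = [p for p in combined_315 if not (LO <= p <= HI)]

# Sub-array 321: all elements outside the overlap plus every other element
# inside it, so the retained inside elements remain widely spaced (>= D2).
sub_321 = sorted(outside + inside[::2])

# Sub-array 323: only the overlap elements; adjacent spacing is D2/2.
sub_323 = inside

print(len(sub_321), len(sub_323))   # 21 and 6 virtual antennas
```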
An exemplary two-dimensional plot of a reflected radar signal corresponding to a single centrally located (0 degree azimuth angle) target from the first beam forming operation upon the first sub-array321is shown at333. The resulting beam forming spectrum320from the first sub-array321is represented graphically as a plot of combined intensity amplitude (relative power) along a vertical axis [dB] vs. azimuth angle (angle of arrival) along a horizontal axis [deg]. In the example target detection plot, the main lobe324is of high angular resolution, as is characteristic of the widely spaced virtual antennas, and corresponds to a true target angle at 0 degrees. Grating lobes322and326, also characteristic of widely spaced virtual antennas, appear at substantially similar amplitudes to the main lobe324and are not readily distinguishable from the true target main lobe, thereby introducing ambiguity into the estimation of the true angle of arrival of the reflected radar signal and target location. A second beam forming operation may be performed using the second virtual sub-array323to evaluate the angle of arrival of reflected radar signals at335. Generally, the beam forming operation receives reflected radar signals from each of the virtual antennas of the second virtual sub-array323and coherently combines them for each angle of arrival. An exemplary two-dimensional plot of a reflected radar signal corresponding to a single centrally located (0 degree azimuth angle) target from the second beam forming operation upon the second virtual sub-array323is shown at337. The resulting beam forming spectrum329from the second virtual sub-array323is represented graphically as a plot of combined intensity amplitude (relative power) along a vertical axis [dB] vs. azimuth angle (angle of arrival) along a horizontal axis [deg]. In the example target detection plot, the main lobe328corresponds to a true target angle at 0 degrees. The main lobe328is of low angular resolution, as is characteristic of the narrowly spaced virtual antennas, but it is not ambiguous, as validated by the absence of any angularly proximate grating lobes of comparable amplitude. The beam forming operations applied to the reflected radar signals from each of the first sub-array321and the second virtual sub-array323may be of any suitable variety. One exemplary method of beam forming includes Bartlett beam forming. Other beam forming methods may be employed including, by way of non-limiting examples, MVDR (Capon), MUSIC, SAMV, Linear Prediction and Machine Learning (e.g., DNN estimation). At339, lobe matches between the beam forming spectrum320and the beam forming spectrum329determine the true angle of arrival or target angle. This may be accomplished, for example, through comparisons and angular matches of spectral peaks from the beam forming spectrums320and329, or through filtering of the high angular resolution beam forming spectrum320in view of the low angular resolution beam forming spectrum329. The true angle of arrival or target angle corresponds to the main lobe324of the high angular resolution beam forming spectrum320that matches the angle of the main lobe328of the low angular resolution beam forming spectrum329. 
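A minimal numerical sketch of the two Bartlett beam forming operations and of one simple realization of the lobe match follows. All values are assumed for illustration (D2 equal to two wavelengths, a single simulated target at 0 degrees, a scan limited to +/-60 degrees); multiplying the two spectra is just one way to filter the high resolution spectrum in view of the low resolution one.

```python
import numpy as np

D2 = 2.0   # assumed RX spacing in wavelengths (K = 2)
# Sub-array positions from the partition sketch above, scaled to wavelengths.
sub_321 = D2 * np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 8.5, 9.5, 10.5,
                         11.5, 12.5, 13.5, 14.5, 15.5, 16.5, 17.5, 18.5, 19.5])
sub_323 = D2 * np.array([8.5, 9.0, 9.5, 10.0, 10.5, 11.0])

def steering(pos, theta_deg):
    """Plane-wave phase at each element position (positions in wavelengths)."""
    return np.exp(2j * np.pi * pos * np.sin(np.radians(theta_deg)))

def bartlett_spectrum(pos, snapshot, angles):
    """Coherently combine the snapshot for each candidate angle of arrival."""
    return np.array([np.abs(steering(pos, a).conj() @ snapshot) for a in angles])

angles = np.arange(-60.0, 60.5, 0.5)        # assumed field of view
snapshot_321 = steering(sub_321, 0.0)       # simulated echo, target at 0 deg
snapshot_323 = steering(sub_323, 0.0)
spec_320 = bartlett_spectrum(sub_321, snapshot_321, angles)   # high res, ambiguous
spec_329 = bartlett_spectrum(sub_323, snapshot_323, angles)   # low res, unambiguous

combined = spec_320 * spec_329              # one simple lobe-matching filter
print("estimated angle of arrival:", angles[int(np.argmax(combined))], "deg")
```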
FIG.5illustrates an alternate embodiment of a radar device500in accordance with the present disclosure. Radar device500includes three substantially identical transceiver sets of MIMO transmit (TX) and receive (RX) chips. In one embodiment, a first transceiver set of TX and RX chips501A, a second transceiver set of TX and RX chips501B, and a third transceiver set of TX and RX chips501C are substantially identical. Each transceiver set includes one TX chip503and one RX chip505. The TX chips503include a TX radar front end and feedlines (not detailed) and a TX sub-array of antennas507. The TX sub-array of antennas507includes a plurality (N) of such antennas507. The RX chips505include a RX radar front end and feedlines (not detailed) and a RX sub-array of antennas509. The RX sub-array of antennas509includes a plurality (M) of such antennas509. In the present example N=3 and M=4. The various chip fabrications, packaging and integrations described with reference toFIG.3are equally applicable with respect to the embodiment shown inFIG.5. Thus, it is envisioned that the TX and RX chips503,505may be discrete components that are co-packaged into an integrated radar device, may be fabricated as part of a complete or partial system-on-chip radar device, or may be incorporated at various other levels of integration as may be required for differing end use applications. As described, the first transceiver set of TX and RX chips501A, the second transceiver set of TX and RX chips501B, and the third transceiver set of TX and RX chips501C are substantially identical. Thus, all TX chips503have substantially identical spatial layouts and all RX chips505have substantially identical spatial layouts. In the embodiment illustrated inFIG.5, the TX sub-array antennas507are spaced by a distance D1 (TX antenna spacing). The RX sub-array antennas509are spaced by a distance D2 (RX antenna spacing). The TX chip503and the RX chip505are separated by a TX chip to RX chip distance D3. The first transceiver set of TX and RX chips501A and the second transceiver set of TX and RX chips501B are offset or spaced by a distance D4 (transceiver set spacing), and the second transceiver set of TX and RX chips501B and the third transceiver set of TX and RX chips501C are also spaced by the same transceiver set spacing distance D4. The RX antenna spacing distance D2 may be established at a value greater than K*λ, where λ is the radar operating wavelength. Radar operating wavelength may include one or both of a transmit wavelength and a receive wavelength. In one embodiment, K is at least 1. In another embodiment, K is an integer. In another embodiment, K is an integer greater than 1. Antenna spacings at or above the RX antenna spacing distance D2 are for the purposes of this disclosure referred to as widely spaced, whereas antenna spacings below the RX antenna spacing distance D2 are for the purposes of this disclosure referred to as narrowly spaced. The TX antenna spacing distance D1 may be established equal to M*D2, where M is the number of RX sub-array antennas509on each RX chip505. The transceiver set spacing distance D4 may be established at less than N*D1, where N is the number of TX sub-array antennas507on each TX chip503. The TX chip to RX chip distance D3 may be established arbitrarily, though it is substantially identical among the first transceiver set of TX and RX chips501A, the second transceiver set of TX and RX chips501B, and the third transceiver set of TX and RX chips501C. By establishing the TX antenna spacing distance D1 and the RX antenna spacing distance D2 as described, each transceiver set of TX and RX chips establishes a respective N*M virtual array of antennas substantially uniformly spaced by the RX antenna spacing distance D2. 
By establishing the transceiver set spacing distance D4 as described, each respective virtual array is offset from the other and overlaps the other by some amount, whereby antennas from the respective virtual arrays alternate within the overlapped region. Thus, while the virtual arrays overlap, the individual array antennas do not overlap but are in spaced adjacency. Preferably, the separation between the adjacent, alternating antennas is substantially uniform and substantially one-half the RX antenna spacing distance D2. In combination, the respective virtual arrays established by the first transceiver set of TX and RX chips501A, the second transceiver set of TX and RX chips501B, and the third transceiver set of TX and RX chips501C together establish a combined virtual array that spans a wider aperture than the individual respective virtual arrays from each of the transceiver sets of TX and RX chips, thus providing higher angular resolution. Significantly, by establishing the transceiver set spacing distance D4 less than N*D1 such that the overlapped antennas alternate with tighter spacing than the RX antenna spacing distance D2, the overlapped region may provide less angular ambiguity. As illustrated inFIG.5, the first transceiver set of TX and RX chips501A establishes a respective first N*M virtual array of antennas511A (cross-hatch filled virtual antennas507V) substantially uniformly spaced by the RX antenna spacing distance D2. In the present exemplary embodiment where N=3 and M=4, the virtual array511A has 12 virtual antennas507V. Similarly, the second transceiver set of TX and RX chips501B establishes a respective second N*M virtual array of antennas511B (solid filled virtual antennas507V) substantially uniformly spaced by the RX antenna spacing distance D2. In the present exemplary embodiment where N=3 and M=4, the virtual array511B also has 12 virtual antennas507V. The third transceiver set of TX and RX chips501C establishes a respective third N*M virtual array of antennas511C (null filled virtual antennas507V) substantially uniformly spaced by the RX antenna spacing distance D2. In the present exemplary embodiment where N=3 and M=4, the virtual array511C also has 12 virtual antennas507V. A combined virtual array515is a universal set of all virtual arrays511A,511B and511C. The combined virtual array515includes the first virtual array511A, the second virtual array511B, and the third virtual array511C, and extends to the extreme regions of the respective first and third virtual arrays511A and511C. The first virtual array511A and the second virtual array511B overlap in overlap region517where the virtual antennas from the respective virtual arrays alternate. Similarly, the second virtual array511B and the third virtual array511C overlap in overlap region517where the virtual antennas from the respective virtual arrays alternate. 
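The three-set geometry can be sketched the same way as the two-set example above; again, all concrete values are assumed for illustration (positions in units of D2, D4 = 4.25*D2).

```python
N, M = 3, 4
D2 = 1.0
D1 = M * D2
D4 = 4.25 * D2    # per-set physical offset; each virtual array shifts by 2*D4

def virtual_positions(offset: float) -> list[float]:
    """Virtual element positions are sums of TX and RX element positions."""
    tx = [offset + n * D1 for n in range(N)]
    rx = [offset + m * D2 for m in range(M)]
    return sorted(t + r for t in tx for r in rx)

arrays = {name: virtual_positions(i * D4)
          for i, name in enumerate(("511A", "511B", "511C"))}
combined_515 = sorted(p for arr in arrays.values() for p in arr)
print("combined span:", combined_515[0], "to", combined_515[-1])   # 0.0 to 28.0
# Adjacent virtual elements alternate at D2/2 within each overlap region:
print([p for p in combined_515 if 8.5 <= p <= 11.0])    # 511A/511B overlap
print([p for p in combined_515 if 17.0 <= p <= 19.5])   # 511B/511C overlap
```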
In one embodiment, radar backend processing may selectively partition the combined virtual array515into a first virtual sub-array including all virtual antennas507V from the first virtual array511A and all virtual antennas507V from the third virtual array511C, thus extending to the extreme outer regions of the combined virtual array515. A first beam forming operation may be performed using this first virtual sub-array to evaluate the angle of arrival of reflected radar signals as described with respect toFIG.4. The first beam forming operation performed using this first virtual sub-array results in high angular resolution, as is characteristic of the widely spaced virtual antennas. A second beam forming operation may be performed using a second virtual sub-array including all virtual antennas507V from the second virtual array511B and those virtual antennas507V from the first and third virtual arrays511A and511C that overlap the virtual antennas507V from the second virtual array511B. The second beam forming operation performed using this second virtual sub-array results in lower angular resolution but less angular ambiguity, as is characteristic of the narrowly spaced virtual antennas. The beam forming operations applied to the reflected radar signals from each of the first and second virtual sub-arrays may be of any suitable variety as described with respect toFIG.4. Overall, the first and second virtual sub-arrays may be processed as described with respect toFIG.4to match lobes between respective beam forming spectrums to determine the true angle of arrival or target angle. Although it may be an objective for the transceiver sets to be equivalent and certain features to be uniform, the tolerances required to meet such an objective may be difficult to achieve in practice. Whereas identical numbers of TX and RX chips across all transceiver sets and identical numbers of TX antennas and RX antennas within respective TX and RX chips are readily attainable, one skilled in the art understands that absolute spatial identity or symmetry is approximate and may vary with design, manufacturing, fabrication, assembly processes and levels of integration. As such, variations in the TX antenna spacings, the RX antenna spacing, the TX chip to RX chip spacing, and the transceiver spacing considered within the tolerable range of those skilled in the art are understood to be inherent and within the meaning of the phrases "substantially identical", "substantially uniform" and "substantially uniformly" as used herein. Embodiments herein may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the embodiments herein. The computer readable storage medium may be a tangible device that may retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein may be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the embodiments herein may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the embodiments herein. Aspects of the embodiments herein are described herein with reference to process flow illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments herein. It will be understood that each block of process flow illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable program instructions. 
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that may direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The process flow and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, may be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. 
In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure is not limited to the particular embodiments disclosed but will include all embodiments falling within the scope thereof.
11860298
DETAILED DESCRIPTION The figures included herein and the various embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged wired or wireless communication system. FIG.1illustrates an electronic device according to various embodiments of the present disclosure. The embodiment of the electronic device100shown inFIG.1is for illustration only. Other embodiments can be used without departing from the scope of the present disclosure. As shown inFIG.1, the electronic device100includes a radio frequency (RF) transceiver110, transmit (TX) processing circuitry115, a microphone120, receive (RX) processing circuitry125, a speaker130, a processor140, an input/output (I/O) interface (IF)145, a memory160, a display165, an input170, and sensors175. Non-limiting examples of sensors175include inertial sensors, proximity sensors, infrared sensors, ultrasonic sensors, laser sensors, and capacitive sensors that can provide contextual operational data usable for opportunistically updating a leakage response. The memory160includes an operating system (OS)162and one or more applications164. The one or more applications164can be Type 1 applications or Type 2 applications that can be used to provide additional contextual operational data also usable for opportunistically updating a leakage response. The transceiver110transmits signals to other components in a system and receives incoming signals transmitted by other components in the system. For example, the transceiver110transmits and receives RF signals, such as BLUETOOTH or WI-FI signals, to and from an access point (such as a base station, WI-FI router, BLUETOOTH device) of a network (such as a WI-FI, BLUETOOTH, cellular, 5G, LTE, LTE-A, WiMAX, or any other type of wireless network). The received signal is processed by the RX processing circuitry125. The RX processing circuitry125may transmit the processed signal to the speaker130(such as for voice data) or to the processor140for further processing (such as for web browsing data). The TX processing circuitry115receives voice data from the microphone120or other outgoing data from the processor140. The outgoing data can include web data, e-mail, or interactive video game data. The TX processing circuitry115processes the outgoing data to generate a processed signal. The transceiver110receives the outgoing processed signal from the TX processing circuitry115and converts the received signal to an RF signal that is transmitted via an antenna. In other embodiments, the transceiver110can transmit and receive radar signals to detect the potential presence of an object in the surrounding environment of the electronic device100. In this embodiment, one of the one or more transceivers in the transceiver110is a radar transceiver150configured to transmit and receive signals for detection and ranging purposes. For example, the radar transceiver150may be any type of transceiver including, but not limited to, a WiFi transceiver, for example, an 802.11ay transceiver. The radar transceiver150includes antenna array(s)155that include transmitter157and receiver159antenna arrays. In some embodiments, the signals transmitted by the radar transceiver150can include, but are not limited to, millimeter wave (mmWave) signals. 
The radar transceiver150can receive the signals, which were originally transmitted from the radar transceiver150, after the signals have bounced or reflected off of target objects in the surrounding environment of the electronic device100. The processor140can analyze the time difference between when the signals are transmitted by the radar transceiver150and received by the radar transceiver150to measure the distance of the target objects from the electronic device100. The transmitter157and the receiver159can be fixed in close proximity to each other such that the distance of separation between them is small. For example, the transmitter157and the receiver159can be located within a few centimeters of each other. In some embodiments, the transmitter157and the receiver159can be co-located in a manner such that the distance of separation is indistinguishable. Based on context information available from other applications executing on the electronic device100, the processor140executes instructions to cause the electronic device to opportunistically update leakage measurements for the transmitter157and the receiver159usable to cancel a leakage signal that is transmitted from the transmitter157to the receiver159. The leakage measurements can be represented by a CIR as described in more detail inFIG.3. The TX processing circuitry115receives analog or digital voice data from the microphone120or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor140. The TX processing circuitry115encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The transceiver110receives the outgoing processed baseband or IF signal from the TX processing circuitry115and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna105. The processor140is also capable of executing the operating system162in the memory160in order to control the overall operation of the electronic device100. For example, the processor140can move data into or out of the memory160as required by an executing process. In some embodiments, the processor140is configured to execute the applications164based on the OS program162or in response to signals received from external devices or an operator. In some embodiments, the memory160is further configured to store data, such as a leakage response for leakage cancelation, which the processor140can utilize to cause various components of the electronic device to perform leakage cancelation individually or cooperatively. In some embodiments, the processor140can control the reception of forward channel signals and the transmission of reverse channel signals by the transceiver110, the RX processing circuitry125, and the TX processing circuitry115in accordance with well-known principles. In some embodiments, the processor140includes at least one microprocessor or microcontroller. The processor140is also coupled to the I/O interface145, the display165, the input170, and the sensors175. The I/O interface145provides the electronic device100with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface145is the communication path between these accessories and the processor140. The display165can be a liquid crystal display (LCD), light-emitting diode (LED) display, organic LED (OLED), active matrix OLED (AMOLED), or other display capable of rendering text and/or graphics, such as from websites, videos, games, images, and the like. 
The processor140can be coupled to the input170. An operator of the electronic device100can use the input170to enter data or inputs into the electronic device100. The input170can be a keyboard, touch screen, mouse, track-ball, voice input, or any other device capable of acting as a user interface to allow a user to interact with the electronic device100. For example, the input170can include voice recognition processing, thereby allowing a user to input a voice command via the microphone120. For another example, the input170can include a touch panel, a (digital) pen sensor, a key, or an ultrasonic input device. The touch panel can recognize, for example, a touch input in at least one scheme among a capacitive scheme, a pressure sensitive scheme, an infrared scheme, or an ultrasonic scheme. The electronic device100can further include one or more sensors175that meter a physical quantity or detect an activation state of the electronic device100and convert metered or detected information into an electrical signal. For example, sensor(s)175may include one or more buttons for touch input, one or more cameras, a gesture sensor, an eye tracking sensor, a gyroscope or gyro sensor, an air pressure sensor, a magnetic sensor or magnetometer, an acceleration sensor or accelerometer, a grip sensor, a proximity sensor, a color sensor, a bio-physical sensor, a temperature/humidity sensor, an illumination sensor, an Ultraviolet (UV) sensor, an Electromyography (EMG) sensor, an Electroencephalogram (EEG) sensor, an Electrocardiogram (ECG) sensor, an infrared (IR) sensor, an ultrasound sensor, a fingerprint sensor, and the like. The sensor(s)175can further include a control circuit for controlling at least one of the sensors included therein. In various embodiments, the electronic device100may be a phone or tablet. In other embodiments, the electronic device100may be a robot or any other electronic device using a radar transceiver.FIG.1does not limit the present disclosure to any particular type of electronic device. FIG.2illustrates a monostatic radar according to various embodiments of the present disclosure. The embodiment of the monostatic radar200shown inFIG.2is for illustration only and other embodiments can be used without departing from the scope of the present disclosure. The monostatic radar200illustrated inFIG.2includes a processor210, a transmitter220, and a receiver230. In some embodiments, the processor210can be the processor140. In some embodiments, the transmitter220and the receiver230can be the radar transceiver150and connected to the transmitter157and receiver159antenna arrays, respectively, included in the antenna array(s)155. In various embodiments, the transmitter220and the receiver230are co-located using a common antenna or nearly co-located while using separate but adjacent antennas. The monostatic radar200is assumed to be coherent such that the transmitter220and the receiver230are synchronized via a common time reference. The processor210controls the transmitter220to transmit a radar signal or radar pulse. The radar pulse is generated as a realization of a desired "radar waveform" modulated onto a radio carrier frequency and transmitted through a power amplifier and antenna (shown as a parabolic antenna), such as the transmitter220, either omni-directionally or focused into a particular direction. 
After the radar pulse has been transmitted, a target240at a distance R from the radar200and within a field-of-view of the transmitted pulse will be illuminated by RF power density $p_t$ (in units of W/m²) for the duration of the transmission. To the first order, $p_t$ is described by Equation 1:

$$p_t = \frac{P_T}{4\pi R^2}\,G_T = \frac{P_T}{4\pi R^2}\cdot\frac{A_T}{\lambda^2/4\pi} = \frac{P_T A_T}{\lambda^2 R^2},$$

where $P_T$ is the transmit power [W], $G_T$ is the transmit antenna gain [dBi], $A_T$ is the effective aperture area [m²], $\lambda$ is the wavelength of the radar RF carrier signal [m], and R is the target distance [m]. The transmit power density impinging onto the target surface leads to reflections depending on the material composition, surface shape, and dielectric behavior at the frequency of the radar signal. Off-direction scattered signals are generally not strong enough to be received back at the receiver230, so only direct reflections contribute to a detectable, received signal. Accordingly, the illuminated area or areas of the target with normal vectors directing back to the receiver230act as transmit antenna apertures with directivities, or gains, in accordance with their effective aperture area or areas. The reflected-back power $P_{refl}$ is described by Equation 2:

$$P_{refl} = p_t A_t G_t \sim p_t A_t r_t\,\frac{A_t}{\lambda^2/4\pi} = p_t \cdot RCS,$$

where $P_{refl}$ is the effective (isotropic) target-reflected power [W], $A_t$ is the effective target area normal to the radar direction [m²], $r_t$ is the reflectivity of the material and shape [0, . . . , 1], $G_t$ is the corresponding aperture gain [dBi], and RCS is the radar cross section [m²]. As depicted in Equation 2, the radar cross section (RCS) is an equivalent area that scales proportionally to the square of the actual reflecting area, is inversely proportional to the square of the wavelength, and is reduced by various shape factors and the reflectivity of the material itself. For example, for a flat, fully reflecting mirror of an area $A_t$ large compared with $\lambda^2$, $RCS = 4\pi A_t^2/\lambda^2$. Due to the material and shape dependency, it is difficult to deduce the actual physical area of the target240based on the reflected power even if the distance R from the target to the radar200is known. The target-reflected power at the location of the receiver230is based on the reflected-power density at the reverse distance R, collected over the receiver antenna aperture area. The received, target-reflected power $P_R$ is described by Equation 3:

$$P_R = \frac{P_{refl}}{4\pi R^2}\,A_R = \frac{P_T \cdot RCS \cdot A_T A_R}{4\pi\,\lambda^2 R^4},$$

where $P_R$ is the received, target-reflected power [W] and $A_R$ is the receiver antenna effective aperture area [m²]. In some embodiments, $A_R$ can be the same as $A_T$. Such a radar system is usable as long as the receiver signal exhibits a sufficient signal-to-noise ratio (SNR). The particular value of the SNR depends on the waveform and detection method used. The SNR is described by Equation 4:

$$SNR = \frac{P_R}{kT \cdot B \cdot F},$$

where kT is Boltzmann's constant × temperature [W/Hz], B is the radar signal bandwidth [Hz], and F is the receiver noise factor, referring to the degradation of receive-signal SNR due to noise contributions of the receiver circuit itself. 
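A minimal numeric pass through Equations 1-4 may help fix the orders of magnitude involved. Every parameter value below is assumed for illustration (a small 77 GHz radar and a 10 m² cross-section target); the disclosure does not specify any of them.

```python
import math

P_T = 1.0                  # transmit power [W] (assumed)
F_C = 77e9                 # carrier frequency [Hz] (assumed)
LAM = 3e8 / F_C            # wavelength [m]
A_T = A_R = 1e-3           # effective TX/RX aperture areas [m^2] (assumed)
RCS = 10.0                 # radar cross section [m^2] (assumed)
R = 30.0                   # target distance [m] (assumed)

p_t = P_T * A_T / (LAM**2 * R**2)          # Eq. 1: power density at the target
P_refl = p_t * RCS                         # Eq. 2: reflected-back power
P_R = P_T * RCS * A_T * A_R / (4 * math.pi * LAM**2 * R**4)   # Eq. 3

kT = 4e-21                 # Boltzmann's constant x temperature [W/Hz] (~290 K)
B = 1e9                    # radar signal bandwidth [Hz] (assumed)
F = 10.0                   # receiver noise factor (10 dB noise figure, assumed)
snr = P_R / (kT * B * F)                   # Eq. 4
print(f"P_R = {P_R:.3e} W, SNR = {10 * math.log10(snr):.1f} dB")
```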
In some embodiments, the radar signal can be a short pulse with a duration, or width, denoted by $T_p$. In these embodiments, the delay $\tau$ between the transmission and reception of the corresponding echo will be equal to $\tau = 2R/c$, where c is the speed of light propagation in the medium, such as air. In some embodiments, there can be several targets240at slightly different distances R. In these embodiments, the individual echoes of each separate target240are distinguished as such only if the delays differ by at least one pulse width, and the range resolution of the radar is described as $\Delta R = c\,\Delta\tau/2 = c\,T_p/2$. A rectangular pulse of duration $T_p$ exhibits a power spectral density $P(f) \sim (\sin(\pi f T_p)/(\pi f T_p))^2$ with the first null at its bandwidth $B = 1/T_p$. Therefore, the connection of the range resolution of a radar with the bandwidth of the radar waveform is described by Equation 5:

$$\Delta R = \frac{c}{2B}.$$

Based on the reflected signals received by the receiver230, the processor210generates a metric that measures the response of the reflected signal as a function of the distance of the target240from the radar. In some embodiments, the metric can be a CIR. FIG.3illustrates an example of a CIR depicting a measured leakage response according to various embodiments of the present disclosure. The CIR is a response metric based on the signals received by the receiver230. For example, the CIR is a measure of amplitude and/or phase of a reflected signal as a function of distance. As shown inFIG.3, the CIR is depicted with the delay tap index denoted on the x-axis, measuring the distance, and the amplitude of the radar measurement [dB] denoted on the y-axis. In a monostatic radar that has separate transmitting and receiving antenna modules, for example the radar200, a strong signal can radiate directly from the transmitter220to the receiver230, causing a strong response at the delay corresponding to the separation between the transmitter220and receiver230. The strong signal radiating from the transmitter220to the receiver230is referred to as a leakage signal. Even if the direct leakage signal from the transmitter220can be assumed to correspond to a single delay, the effect of the direct leakage signal can still impact multiple delay taps adjacent to the direct leakage signal. In the measured leakage response illustrated inFIG.3, the main leakage peak is denoted at tap11. In addition, taps10and12also have strong responses, noted by the responses being greater than 20 dB above the noise floor. Because of the additional responses such as shown at taps10and12, it is difficult to reliably detect and estimate the target range within those first few taps from the leakage taps. FIG.4illustrates a timing diagram for radar transmission according to various embodiments of the present disclosure. In particular,FIG.4illustrates a frame structure that divides time into frames that each comprise multiple bursts. Each burst includes a plurality of pulses. The timing diagram illustrated inFIG.4assumes an underlying pulse compression radar system. As illustrated inFIG.4, each frame includes a number of bursts N, illustrated as Burst1, Burst2, Burst3, up to Burst N. Each burst is formed from a plurality of pulses. For example,FIG.4illustrates that Burst1comprises a plurality of pulses referenced as Pulse1, Pulse2, etc. through Pulse M. For example, in Burst1a radar transceiver, such as the transmitter157, can transmit Pulse1, Pulse2, and Pulse M. In Burst2, the transmitter157can transmit similar pulses Pulse1, Pulse2, and Pulse M. Each different pulse (Pulse1, Pulse2, and Pulse M) and burst (Burst1, Burst2, Burst3, etc.) can utilize a different transmission/reception antenna configuration, that is, the active set of antenna elements and corresponding analog/digital beamforming weights, to identify the specific pulses and bursts. 
For example, each pulse or burst can utilize a different active set of antenna elements and corresponding analog/digital beamforming weights to identify specific pulses and bursts. A processor, such as the processor140, connected to the transmitter157obtains radar measurements at the end of each frame. For example, the radar measurements can be depicted as a three-dimensional complex CIR matrix. The first dimension may correspond to the burst index, the second dimension may correspond to the pulse index, and the third dimension may correspond to the delay tap index. The delay tap index can be translated to the measurement of range or time of flight of the received signal. The leakage signal from the radar transmitter to the radar receiver can hinder the target detection and range estimation abilities of the radar, particularly for objects within a proximity of and within a field of view of the radar transceiver. In some exemplary embodiments, objects are within the proximity of and within the field of view of the radar transceiver when an object is less than about 20 cm from the radar transceiver. In a more particular embodiment, objects are within the proximity of and within the field of view of the radar transceiver when an object is less than about 10 cm from the radar transceiver. Cancelation of the leakage signal can overcome this issue. Pre-measured leakage signals stored on an electronic device, such as in memory160of electronic device100, can be used to cancel the leakage signal from radar measurements. This approach is feasible because the leakage signal is propagated through a rigidly defined path determined by the device hardware, which can be assumed to be constant for a relatively long duration under similar environmental conditions. Occasional update of the stored leakage measurement can ensure the accuracy of radar-based sensing. So that resources are not continually being used to update a leakage measurement when inconvenient or unnecessary, novel aspects of the various embodiments disclosed herein are directed to opportunistically updating a stored leakage measurement when necessary and/or when possible. For example, a stored leakage measurement that was recently obtained may not need to be updated and therefore can be deemed valid. If a stored leakage measurement is no longer valid, then the stored leakage measurement can be updated only when possible. For example, update of a stored leakage measurement is not possible if an object is within a proximity to and within a field of view of the radar transceiver. Various embodiments of the present disclosure are directed to the use of context information from various applications executing on the electronic device to determine whether a stored leakage measurement is still valid, and if not, when update of the stored leakage measurement is possible. Regardless of whether the executing applications directly utilize radar measurements, successful operation of these applications is generally contingent upon the lack of objects within a proximity to and within a field of view of the radar transceiver. An exemplary application that will be explained in more detail in the figures that follow involves radar-based face authentication. In this case, for successful operation, there must be no obstacle between the radar antenna modules and the user's face, which are typically separated by a distance of between 20 and 50 cm. 
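Returning to the structure of the radar measurements, the three-dimensional CIR matrix described above, and the translation of its delay tap index to range via Equation 5, can be sketched as follows. The matrix dimensions and the 500 MHz bandwidth are assumed example values.

```python
import numpy as np

N_BURSTS, M_PULSES, N_TAPS = 8, 16, 64     # assumed frame geometry
rng = np.random.default_rng(0)
cir = (rng.standard_normal((N_BURSTS, M_PULSES, N_TAPS))
       + 1j * rng.standard_normal((N_BURSTS, M_PULSES, N_TAPS)))

# cir[b, p, :] is one per-pulse impulse response; averaging over bursts and
# pulses yields a single amplitude-vs-delay profile like FIG. 3.
profile = np.abs(cir).mean(axis=(0, 1))

C = 3e8                                    # speed of light [m/s]
B = 500e6                                  # assumed radar signal bandwidth [Hz]
delta_r = C / (2 * B)                      # Eq. 5: range per delay tap (0.3 m)
leakage_tap = 11                           # main leakage peak, as in FIG. 3
for tap in (10, 11, 12, 20):
    # Range is measured relative to the direct TX-to-RX leakage delay.
    print(f"tap {tap}: {(tap - leakage_tap) * delta_r:+.2f} m from the leakage peak")
```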
Up-to-date leakage measurements can be extracted from radar measurements that have yielded a desirable result (e.g., a successful authentication). The extracted leakage measurement can be used to update a leakage response of a radar transceiver in an electronic device by canceling out the leakage signal of the radar measurement. The updated leakage response can then be used for reliable detection and accurate ranging of targets, particularly within the proximity of and within a field of view of the radar transceiver. FIG. 5 illustrates a flowchart of general operations for leakage cancelation according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the operations described in flowchart 500 for canceling the effect of the leakage signal that is transmitted directly from a transmitter to a receiver. For example, radar measurements taken in operation 502 include a leakage signal that can be canceled in operation 506 by stored leakage measurements obtained from operation 504. A stored leakage measurement is data describing the signal strength of a set of leakage signals relative to a delay tap index, which can be attributed to the leakage signal transmitted directly from the transmitter to the receiver of a radar transceiver. The stored leakage measurement can be represented in a CIR as shown in FIG. 3. Target detection and range estimation can be achieved in operation 508 using the radar measurements after the leakage cancelation. The stored leakage measurements of FIG. 5 can be associated with one or more state variables, such as a timestamp, temperature, or humidity, describing conditions when the stored leakage measurement was obtained. Each of the state variables can be further divided into one or more categories or ranges. For example, the stored leakage measurements could be stored for each temperature category (such as high, medium, or low, or the temperature could be divided into multiple bins of size N degrees each). Then, a leakage measurement update could be done for each temperature category separately. Also, when the stored measurement is used to remove the leakage for radar detection and estimation, the temperature at which the radar measurement was taken can be used to select the appropriate stored leakage measurement for the leakage removal. Other types of information could be used in a similar manner. For example, humidity is another factor that could affect the behavior of the circuitry of the device and thus can also impact the leakage behavior, and it could be used as part of the operating environment description. FIG. 6 illustrates a flowchart of operations for opportunistically updating a leakage measurement according to a non-limiting embodiment of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the operations described in flowchart 600 to determine the validity of a stored leakage measurement and to update the stored leakage measurement if necessary and if possible. A state of the electronic device is identified in operation 602. The state of the device is based on one or more state variables, examples of which can include time, temperature, and humidity. Based on the state of the device, the validity of a stored leakage measurement can be determined in operation 604.
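As a minimal sketch of the cancelation flow of flowchart 500 above, the following Python fragment subtracts a stored leakage CIR from a fresh measurement (operation 506) and runs a simple peak search for detection (operation 508); the noise-floor estimate, margin, and function names are illustrative assumptions.

```python
import numpy as np

def cancel_leakage(cir: np.ndarray, stored_leakage: np.ndarray) -> np.ndarray:
    """Operation 506 (sketch): subtract the stored leakage response from a complex CIR."""
    return cir - stored_leakage

def detect_peak_tap(clean_cir: np.ndarray, margin_db: float = 20.0):
    """Operation 508 (sketch): return the strongest delay tap if it clears the noise floor."""
    power_db = 20.0 * np.log10(np.abs(clean_cir) + 1e-12)
    noise_floor = np.median(power_db)  # crude noise-floor estimate over all taps
    tap = int(np.argmax(power_db))
    return tap if power_db[tap] > noise_floor + margin_db else None
```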
The flowcharts depicted in FIGS. 7-9 and the related embodiments illustrate some non-limiting examples for determining the validity of stored leakage measurements based on state variables. If the stored leakage measurement is still valid, then the stored leakage measurement is not updated in operation 606. Otherwise, if the stored leakage measurement is no longer valid, as determined in operation 604, then a determination as to whether the stored leakage measurement can be updated is made in operation 608. Flowchart 600 proceeds to operation 610 if the stored leakage measurement cannot be updated, or to operation 612 if the stored leakage measurement can be updated. There are different approaches for updating stored leakage measurements in operation 612. For example, a simple approach is to replace the stored leakage measurement with a newly obtained leakage measurement. Another approach involves averaging, either a simple average of all past valid leakage measurements or a weighted average. In one embodiment, the weighted average can include all historical leakage measurements, and in another embodiment, the weighted average spans only a certain window of time to include only a subset of historical leakage measurements. Yet another weighted-average approach could use the timestamps of the leakage measurements to determine the age of the measurements and perform averaging weighted by the freshness of the measurements (e.g., giving more weight to more recent leakage measurements). Note that if the leakage measurements are stored for different categories of the operating environment of the radar (e.g., defined by state variables such as temperature and/or humidity), the averaging methods described so far could be applied to the measurements belonging to each operating environment category separately. FIG. 7 illustrates a flowchart of steps for determining validity of stored leakage measurements according to various embodiments of the present disclosure. A classifier can determine if a leakage update is needed (i.e., make the validity determination) in operation 706 based on a stored state variable (Slk) of a stored leakage measurement from operation 702 and a current state variable (Scu) of the electronic device from operation 704. The stored state variable can be maintained in the memory 160 and compared with the corresponding state variable determined by one or more sensors 175 and/or applications 164. Based on the result of the determination made in operation 706, flowchart 700 proceeds to operation 708 if the stored leakage measurement is not valid, or to operation 710 if the stored leakage measurement is still valid. FIG. 8 illustrates a flowchart for determining validity of leakage measurements with reference to time as a state variable according to various embodiments of the present disclosure. A processor can make the validity determination in operation 806 using the stored timestamp (tlk) of a stored leakage measurement from operation 802 and a current timestamp (tcu) from operation 804. The stored state variable can be maintained in the memory 160 and compared with the corresponding state variable determined by one or more applications 164 capable of providing a current timestamp. For example, in operation 806 a processor can determine whether the difference between the stored timestamp and the current timestamp exceeds a predefined threshold value. If the difference exceeds the predefined threshold value, the stored leakage measurement is deemed invalid in operation 808; otherwise, it is deemed valid in operation 810.
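As a sketch of the update approaches described above for operation 612, the following Python fragment implements replacement, a simple average, and a freshness-weighted average over stored leakage CIRs; the half-life constant is an illustrative assumption.

```python
import numpy as np

def update_replace(stored_cir, new_cir):
    """Simplest approach: replace the stored leakage measurement outright."""
    return new_cir

def update_simple_average(history):
    """Simple average of all past valid leakage CIRs (list of complex arrays)."""
    return np.mean(np.stack(history), axis=0)

def update_freshness_weighted(history, timestamps, now, half_life_s=3600.0):
    """Average weighted by freshness: newer measurements get more weight."""
    ages = now - np.asarray(timestamps, dtype=float)
    weights = 0.5 ** (ages / half_life_s)      # exponential decay with age
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(history), axes=1)
```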
FIG. 9 illustrates a flowchart for determining validity of leakage measurements with reference to temperature and humidity as state variables according to various embodiments of the present disclosure. A processor can make the validity determination in operation 910 based on a comparison of a temperature of a stored leakage measurement (Tlk) from operation 902 and a current temperature (Tcu) of the electronic device from operation 908, and/or a comparison of a humidity of a stored leakage measurement (Hlk) from operation 904 and a current humidity (Hcu) of the electronic device from operation 906. The stored state variables can be maintained in the memory 160 and compared with the corresponding state variables determined by one or more sensors 175 capable of providing a current temperature and/or humidity. In the non-limiting embodiment depicted in FIG. 9, the validity determination of operation 910 checks whether a difference between the current temperature (Tcu) and the stored temperature (Tlk) associated with the stored leakage measurement exceeds a temperature threshold, and/or whether a difference between the current humidity (Hcu) and the stored humidity (Hlk) associated with the stored leakage measurement exceeds a humidity threshold. Flowchart 900 proceeds to operation 912 if the temperature threshold is exceeded, the humidity threshold is exceeded, or both thresholds are exceeded. Flowchart 900 proceeds to operation 914 if neither the temperature threshold nor the humidity threshold is exceeded. For ease of discussion, the opportunistic updating of leakage measurements can be separated into two different types of applications. The first type of application, which may be referred to herein as a Type 1 application, is an application that uses radar measurements. These radar-based applications do not necessarily require target detection as in a typical radar use case. Some examples include face authentication and gesture recognition, where explicit radar detection is not required (although it can still be used). The second type of application, which may be referred to herein as a Type 2 application, does not use radar measurements. Type 2 applications can use other non-radar sensors (e.g., a camera) or no sensor at all. Operational contextual data from non-radar sensors or from the application itself can be used to infer whether an update of the leakage measurement is possible (i.e., that the radar field of view is clear of objects so that a new leakage measurement can be obtained). In both Type 1 and Type 2 applications, a leakage measurement update decision is made based on an inference as to whether an object is within a proximity to and within a field of view of the associated radar transceiver, which would prevent the capture of an accurate leakage measurement. FIG. 10 illustrates a flowchart for a leakage measurement update decision for radar-based applications according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the operations described in flowchart 1000 to arrive at the leakage measurement update decision. Generally, Type 1 applications obtain and process radar measurements to generate some operational contextual data, which is application-specific, describing the state of the operation of the application.
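A minimal sketch of the validity tests of FIGS. 8 and 9 above, combining the timestamp, temperature, and humidity comparisons; the threshold values are illustrative assumptions, not taken from the disclosure.

```python
def leakage_still_valid(t_lk, t_cu, T_lk, T_cu, H_lk, H_cu,
                        max_age_s=86400.0, temp_thresh_c=5.0, hum_thresh_pct=20.0):
    """Return True if the stored leakage measurement is still valid."""
    if (t_cu - t_lk) > max_age_s:          # FIG. 8: timestamp difference check
        return False
    if abs(T_cu - T_lk) > temp_thresh_c:   # FIG. 9: temperature difference check
        return False
    if abs(H_cu - H_lk) > hum_thresh_pct:  # FIG. 9: humidity difference check
        return False
    return True
```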
The operational contextual data can then be used to determine if the leakage measurement can be updated, as described in FIG. 10 and the figures that follow. In flowchart 1000, radar measurements are obtained in operation 1002 for Type 1 applications. The radar measurements can be obtained from the radar transceiver 150 in FIG. 1. The radar measurements include leakage signals transmitted directly from the radar transmitter 157 to the radar receiver 159, as well as the signals returning to the receiver 159 from a target within a field of view of the radar transceiver 150. Based on those radar measurements obtained in operation 1002, a determination is made in operation 1004 as to whether the leakage measurement can be updated. If the leakage measurement can be updated, then in operation 1006 measurements corresponding to the leakage signal are extracted from the radar measurement. In a particular embodiment, the extraction is achieved by selecting the signal response(s) corresponding to small delay taps (e.g., in the range of about 0-20 cm, or about 0-15 cm). These small delay taps may be referred to in the alternative as “leakage taps”. Because the leakage is the direct transmission between the transmitter and the receiver, the path length is short and thus its primary impact is at short-range distances. For this reason, to cancel the main leakage, the radar measurements at close range, or equivalently at small delay indices, are of particular concern. In operation 1008, the stored leakage measurement can be updated with the extracted measurements corresponding to the leakage signal. If, at operation 1004, a determination is made that the leakage measurement cannot be updated, then the leakage measurement is not updated at operation 1010. FIG. 11 illustrates a flowchart for a leakage measurement update decision for radar-based presence detection according to various embodiments of the present disclosure. The leakage measurement update decision can be made based, at least in part, on information from a Type 1 application that employs an algorithm for processing raw radar measurements to detect the presence of an object in its vicinity. The raw radar measurements include contributions from the leakage signal. The application might also have range estimation functionality, which will likely be inaccurate due to the influence of the leakage signal, particularly at close-range distances such as distances less than about 20 cm, or distances less than about 10 cm. Presence detection is achieved by observing the behavior of the CIR near the leakage taps. The leakage contribution, which originates from a static source, possesses certain predictable behaviors. By detecting deviations of the measured radar signals from those behaviors, it is possible to detect the presence of an object. Various approaches could be used as the detection algorithm, including classical signal processing algorithms and machine learning approaches. An example signal processing approach is a method that detects changes in the shape of the leakage CIR. Such a method can compute some notion of distance to stored templates of the pure leakage CIR; if the resulting distance exceeds a certain threshold, a target is detected, and otherwise no target is detected. Example machine learning approaches include classifiers such as k-nearest neighbors, support vector machines, or neural network-based classifiers.
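As a sketch of the template-distance approach just described, the following fragment compares the leakage-tap region of a measured CIR against a stored pure-leakage template; the Euclidean distance metric and the threshold are illustrative assumptions.

```python
import numpy as np

def target_present(cir: np.ndarray, leakage_template: np.ndarray,
                   n_leakage_taps: int, threshold: float) -> bool:
    """Detect a target from deviations of the CIR shape near the leakage taps."""
    measured = np.abs(cir[:n_leakage_taps])
    template = np.abs(leakage_template[:n_leakage_taps])
    distance = np.linalg.norm(measured - template)  # one possible notion of distance
    return distance > threshold                     # large deviation => target detected
```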
The classifier can be trained to recognize the behavior of the pure leakage CIR, so that it can differentiate a pure leakage CIR from a non-pure leakage CIR (i.e., when one or more targets are present). A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of operations described in flowchart 1100. In operation 1102, radar measurements for presence detection are obtained. The measurements may be obtained by the radar transceiver 150 from FIG. 1. A determination is made in operation 1104 as to whether a target's presence is detected. If the target's presence is not detected, then no object is within the proximity of and within a field of view of the radar transceiver. Measurements corresponding to the leakage signal between the transmitter and the receiver are extracted in operation 1106 and used to update the stored leakage measurement in operation 1108. If a target is detected in operation 1104, then a possibility exists that the object could be within the proximity of and within the field of view of the radar transceiver. Accordingly, flowchart 1100 proceeds to operation 1110 and the stored leakage is not updated. FIG. 12 illustrates a flowchart for radar-based range estimation using an updated leakage response according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of operations described in flowchart 1200 for range estimation. In operation 1202, radar measurements for presence detection are obtained. A determination is made in operation 1204 as to whether a target is detected. If a target is not detected, then the stored leakage measurement is updated in operation 1206 if needed. In a non-limiting embodiment, the stored leakage measurement is updated by extracting the leakage signal from the radar measurements obtained in operation 1202. Returning to operation 1204, if a target is detected, then the target's range is estimated in operation 1208 using an updated leakage response that was previously obtained. FIG. 13 illustrates a flowchart for a leakage measurement update decision for radar-based face authentication according to various embodiments of the present disclosure. Flowchart 1300 describes the use of operational contextual data from a radar-based face authentication application for the leakage update decision. Radar measurements for face authentication are inputs into a face authentication algorithm, and the output of the face authentication application contains the desired operational contextual data. For example, if the face authentication application successfully performs a radar measurement, whether the user is authenticated or not, then it can be assumed that a face was properly captured in the radar measurements without any obstructing objects in the environment positioned between the radar transceiver and the user's face. An illustration depicting a typical use case of an electronic device for face authentication is depicted in FIG. 14. The radar measurements of the user's face contain leakage signals in the small delay taps that can be used for updating a leakage measurement. Using radar measurements for face authentication obtained in operation 1302, a determination is made in operation 1304 as to whether face authentication is successfully completed.
In one embodiment, the successful completion of face authentication is the authentication of a user on an electronic device executing the radar-based face authentication application. In another embodiment, successful completion of face authentication can be a rejection of the user's authentication attempt based on unobstructed radar measurements. If face authentication is successfully completed, then measurements corresponding to the leakage signal are extracted from the radar measurements in operation 1306. A stored leakage measurement is updated with the extracted measurements in operation 1308. If face authentication is not successfully completed in operation 1304, then the leakage measurement is not updated in operation 1310. FIG. 14 illustrates a user interacting with an electronic device for radar-based face authentication according to various embodiments of the present disclosure. The electronic device 1400, which is an electronic device such as the device 100 in FIG. 1, executes a radar-based authentication application (not shown) for authenticating a user 1402. The electronic device 1400 is held at a distance D from the face of the user 1402. Generally, the distance is between 20 and 50 cm, which ensures that objects are not present within a proximity to and within a field of view of the electronic device 1400 (i.e., between 0 and 20 cm from the electronic device). FIG. 15 illustrates a flowchart for a leakage measurement update decision for radar-based mood or heartbeat monitoring according to various embodiments of the present disclosure. Flowchart 1500 describes the use of operational contextual data from a radar-based mood or heartbeat monitoring application for the leakage update decision. Radar measurements can be used by a Type 1 application for monitoring a user's mood or heartbeat, an example of which is a mobile application for monitoring a driver for drowsiness or incapacity. Radar can be used to infer a driver's physical state based on physiological patterns such as heartbeat, breathing, etc. In this embodiment, the mobile device executing the Type 1 application can be placed on the dashboard facing the driver. In a typical use case, there will be no obstructing object between the radar transceiver and the driver. Flowchart 1500 begins with radar measurements for mood or heartbeat monitoring, which are obtained in operation 1502. Using those radar measurements, operation 1504 determines whether the leakage measurement can be updated based on signal strength and/or Doppler. Regarding the likelihood of clearance for a leakage measurement, extra precautions can be incorporated to ensure a better quality of the captured measurements. For example, signal strength and Doppler information can provide additional operational contextual data that can be used to determine if the vehicle is moving. Movement of the vehicle will manifest as vibrations in the electronic device, which are micro-movements relative to other objects in the vehicle. By confirming that there is no substantial energy in the leakage tap signal in the non-zero Doppler bins, it can be inferred that there is no obstructing object near the radar transceiver and the leakage can be updated. In other words, objects that are within a proximity of and within a field of view of the radar transceiver will produce reflected energy levels in the leakage taps that exceed background levels.
Conversely, when no objects are within the proximity of and within the field of view of the radar transceiver, the reflected energy in the leakage taps will be proportionate with background levels. The amount of energy in the non-zero Doppler bins at the small delay taps can be used as the inverse of the confidence level; that is, the stronger the energy, the less likely that the leakage can be updated. Confidence levels are discussed in more detail in FIGS. 21 and 22 that follow. If the leakage measurement can be updated, then a measurement corresponding to the leakage signal is extracted in operation 1506 from the radar measurements used in the mood or heartbeat application. However, if the leakage measurement cannot be updated based on the results of operation 1504, then the leakage measurement is not updated in operation 1510. FIG. 16 illustrates a flowchart for a leakage measurement update decision for applications using a non-radar sensor and a radar transceiver according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the operations described in flowchart 1600 for making an update decision using non-radar sensors and radar-based sensors. In a particular embodiment, the non-radar sensor is an inertial sensor that can be used to determine movement of the electronic device, and subsequent analysis of the radar measurement from the radar transceiver can be used to determine if a new leakage measurement can be obtained. If the electronic device is in motion with respect to its surroundings, then the Doppler information and the signal strength can be used to detect if there is any obstacle in its vicinity. The device motion could also be inferred from the Type 1 application usage without using inertial sensors, as described in more detail in FIG. 15. Since the device is in motion with respect to its immediate surroundings, if there is an obstructing object in the vicinity of the radar antenna module, then the reflection from this object will possess non-zero Doppler. The leakage, being the direct signal from the radar transmit antenna to the receive antenna, which are rigidly installed on the device and thus static with respect to each other, will fall into the zero Doppler bin. Thus, by confirming that there is no substantial energy in the leakage tap signal in the non-zero Doppler bins, it can be inferred that there is no obstructing object near the radar transceiver and the leakage can be updated. Note that in this case the amount of energy in the non-zero Doppler bins at the small delay taps can be used as the inverse of the confidence level; that is, the stronger the energy, the less likely that the leakage can be updated, a fact that can be used during the calculation of confidence levels. Flowchart 1600 begins with obtaining input from one or more sensors in operation 1602. Using the sensor input, operation 1604 determines whether the device is in motion. If the device is not in motion, then the leakage measurement is not updated in operation 1606. However, if the device is in motion, then radar measurements are obtained in operation 1608 and the flowchart proceeds to operation 1610, where a determination is made as to whether signal strength and Doppler can be used to infer that a leakage measurement can be updated.
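A sketch of the operation 1610 inference, assuming a burst-by-tap slice of the complex CIR so that an FFT across bursts yields Doppler bins; the energy threshold is an illustrative assumption.

```python
import numpy as np

def clear_for_leakage_update(cir_bursts: np.ndarray, n_leakage_taps: int,
                             energy_thresh: float = 1e-3) -> bool:
    """True if non-zero-Doppler energy at the leakage taps stays near background."""
    doppler = np.fft.fft(cir_bursts[:, :n_leakage_taps], axis=0)  # bursts -> Doppler bins
    nonzero = np.delete(doppler, 0, axis=0)                       # drop the zero-Doppler bin
    energy = np.mean(np.abs(nonzero) ** 2)
    return energy < energy_thresh

def update_confidence(cir_bursts: np.ndarray, n_leakage_taps: int) -> float:
    """Inverse-energy confidence: stronger non-zero-Doppler energy, lower confidence."""
    doppler = np.fft.fft(cir_bursts[:, :n_leakage_taps], axis=0)
    energy = np.mean(np.abs(np.delete(doppler, 0, axis=0)) ** 2)
    return 1.0 / (1.0 + energy)
```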
If signal strength and Doppler indicate that the leakage measurement can be updated, then the flowchart proceeds to operation 1612, where measurements corresponding to the leakage signal are extracted from the radar measurements. The leakage measurement is updated in operation 1614. However, if at operation 1610 a determination is made that the leakage measurement cannot be updated, then the leakage measurement is not updated in operation 1616. FIG. 17 illustrates a general flowchart for a leakage measurement update decision for a non-radar application according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of steps described in flowchart 1700 to make the leakage measurement update decision. Operational contextual data obtained in operation 1702 can be used to make a leakage measurement update decision in operation 1704. In some embodiments the Type 2 application uses non-radar sensors, such as proximity sensors and inertial sensors, to obtain operational contextual data, and in other embodiments the operational contextual data is derived directly from the execution of the application. In either event, if the leakage measurement can be updated, then a radar leakage measurement is performed in operation 1706. The radar leakage measurement is performed by activating the radar transceiver to perform a set of radar measurements that can be processed to obtain leakage measurements, which are used to update the stored leakage measurements in operation 1708. If the leakage measurement cannot be updated, as determined in operation 1704, then the leakage measurements are not updated in operation 1710. FIG. 18 illustrates a flowchart for a leakage measurement update decision for a non-radar application using sensors according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of steps described in flowchart 1800 to make the leakage measurement update decision. Sensor measurements for a Type 2 application are obtained in operation 1802. The sensor measurements can be captured directly from one or more sensors or derived from data captured by the one or more sensors. In operation 1804, a determination is made as to whether the leakage measurement can be updated based on the sensor measurements. If the leakage measurement can be updated, then a radar leakage measurement is performed in operation 1806. The radar leakage measurement is performed by activating the radar transceiver to perform a set of radar measurements that can be processed to obtain leakage measurements, which are used to update the stored leakage measurements in operation 1808. If the leakage measurement cannot be updated, as determined in operation 1804, then the leakage measurements are not updated in operation 1810. FIG. 19 illustrates a flowchart of a process for a leakage measurement update decision for vision-based face authentication in a non-radar application according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of steps described in flowchart 1900 to make the leakage measurement update decision based on a successful image capture.
In particular, if a user's face is successfully captured, regardless of whether the user was actually authenticated, then an inference can be made that no objects are present between the electronic device and the user's face. This also means that the environment near the radar transceiver is clear for the leakage measurement. In some embodiments, the result of the vision-based authentication application can be a factor for consideration in a confidence level determination. For example, a successful authentication may be weighted higher than an unsuccessful authentication, because an unsuccessful authentication could be attributed to additional factors, such as an unintended and undetected obstruction by a user's hand or fingers. To reduce or eliminate obstructions by a user's hand or fingers, additional sensor data can be captured and used to determine the location of the user's hand or fingers. For example, capacitive touch sensors can be used to detect grip, or infrared-based proximity sensors near the radar transceiver can be used. The sensor data can be incorporated into the computation of a confidence level, as will be described in FIGS. 21 and 22. Returning to flowchart 1900, the process begins in step 1902 by capturing a camera image for a vision-based face authentication application. A determination is made as to whether the image capture was successful in step 1904. If the image capture was successful, then a radar leakage measurement is performed in step 1906 and the stored leakage measurement is updated in step 1908. However, if at step 1904 a determination is made that the image capture was not successful, then the process continues to step 1910 and the stored leakage measurement is not updated. While the exemplary embodiment described in FIG. 19 is related to face authentication, the steps of flowchart 1900 can be generally applied to other forms of biometric authentication, such as iris sensor authentication and fingerprint authentication, where operational contextual data obtained in step 1902 can be used to infer that there is no object in the vicinity of the radar transceiver for the purpose of leakage measurement. FIG. 20 illustrates a flowchart of a process for a leakage measurement update decision for proximity sensors in a non-radar application according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the series of steps described in flowchart 2000. In addition, the sensors 175 can include one or more proximity sensors capable of capturing sensor data usable for making an update decision. Examples of proximity sensors include infrared, ultrasonic, laser, or capacitive-based sensors, any other type of proximity sensing based on touch or hand grip, and even advanced methods such as image processing on camera images to identify objects and measure their distances. Proximity sensor data is obtained in step 2002 and used to determine whether objects are in a vicinity of the radar transceiver in step 2004. If objects are not in the vicinity of the radar transceiver, then a radar leakage measurement can be performed in step 2006 as described in earlier embodiments. A stored leakage measurement can be updated in step 2008 using the results of the radar leakage measurement before the process terminates.
If a determination is made that an object is within the vicinity of the radar transceiver in step 2004, then the leakage measurement is not updated in step 2010 and the process terminates. FIG. 21 illustrates a flowchart for integrating confidence levels into a leakage measurement update decision according to various embodiments of the present disclosure. A confidence level is a set of computed values that can be used to weight inputs to the leakage measurement update procedure. The confidence level can be computed by a processor in an electronic device, such as the processor 140 of the electronic device 100 in FIG. 1, from data captured by one or more sensors 175 or data originating from one or more of the applications 164, as previously discussed. Different approaches could be used to perform a leakage measurement update based on the confidence level. One example is to perform averaging weighted by the confidence level. Another possibility is to perform the averaging using weights computed from both the confidence level and the freshness of the measurement (e.g., determined from the recorded timestamp). In operation 2102, radar measurements are obtained for a Type 1 application. A confidence level can be computed in operation 2104 based on the radar measurements and input into the leakage measurement update procedure of operation 2108, which also takes into consideration a radar measurement corresponding to the leakage signal that is extracted in operation 2106. While the flowchart in FIG. 21 is described relative to a Type 1 application, operational contextual data can be captured from Type 2 applications for use in computing a confidence level that can be used in making a leakage measurement update decision. For example, a confidence level can be computed for the vision-based face authentication application described in FIG. 19, which takes into consideration not only that a successful image was captured for face authentication, but also whether the result of the face authentication was successful or not. A successful authentication may be given a higher confidence level than an unsuccessful authentication. FIG. 22 illustrates a flowchart for integrating a confidence level decision into a leakage measurement update decision according to various embodiments of the present disclosure. The leakage measurement update decision combines a soft decision and a hard decision based on the confidence level. The confidence level can be computed by a processor in an electronic device, such as the processor 140 of the electronic device 100 in FIG. 1, from data captured by one or more sensors 175 or data originating from one or more of the applications 164, as previously discussed. In operation 2202, radar measurements are obtained for a Type 1 application. A confidence level is computed in operation 2204 based on those radar measurements. In operation 2206, a determination is made as to whether the confidence level exceeds a threshold. If the confidence level exceeds the threshold, then in operation 2208 a measurement is extracted from the radar measurement that corresponds to the leakage signal. In operation 2210, the stored leakage measurement is updated. However, if at operation 2206 the determination is made that the confidence level does not exceed the threshold, then in operation 2212 the stored leakage measurement is not updated.
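A sketch combining the soft and hard decisions of FIGS. 21 and 22: a threshold gate on the confidence level followed by an average weighted by confidence and freshness; the threshold and half-life values are illustrative assumptions.

```python
import numpy as np

def update_with_confidence(history, confidences, ages_s, new_cir, new_conf,
                           conf_threshold=0.5, half_life_s=3600.0):
    """Hard gate (FIG. 22) then confidence- and freshness-weighted average (FIG. 21)."""
    if new_conf <= conf_threshold:        # hard decision: below threshold, no update
        return None                       # caller keeps the stored leakage unchanged
    history = list(history) + [new_cir]
    confidences = list(confidences) + [new_conf]
    ages_s = list(ages_s) + [0.0]         # the new measurement has zero age
    freshness = 0.5 ** (np.asarray(ages_s) / half_life_s)
    weights = np.asarray(confidences) * freshness   # soft decision: joint weighting
    weights /= weights.sum()
    return np.tensordot(weights, np.stack(history), axes=1)
```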
While the flowchart in FIG. 22 is described relative to a Type 1 application, operational contextual data can be captured from Type 2 applications for use in computing a confidence level that can be used in making a leakage measurement update decision. For example, a confidence level can be computed for the vision-based face authentication application described in FIG. 19. In addition, confidence levels can be incorporated into the Type 2 applications discussed in FIGS. 23 and 24, which use operational contextual data derived from voice or video call applications. FIG. 23 illustrates a flowchart of a process for obtaining a leakage measurement update decision for a voice or video application according to an illustrative embodiment. The process can be implemented in a communication-enabled electronic device, such as a phone, a tablet, or a smart watch. Additionally, the process proceeds under the assumption that calls accepted while the electronic device is not in hands-free mode will be brought towards the user's face or suspended in midair by the user's hand so that the call can be conducted on speakerphone. The condition that hands-free mode is not used reduces the likelihood that the call might be accepted while the electronic device is kept in a pocket. The process starts when a call for a voice or video application is received in step 2302. A determination is made in step 2304 as to whether the call is accepted without hands-free mode. If the call is accepted without hands-free mode, then a radar leakage measurement is performed in step 2306. A stored leakage measurement is updated with the new radar leakage measurement in step 2308 and the process ends. Returning to step 2304, if a determination is made that the call is not accepted without hands-free mode, then the stored leakage measurement is not updated in step 2310 and the process ends. In another embodiment, a rejection of the call without hands-free mode active could also be used to trigger the radar leakage measurement, on the assumption that the user would be holding the electronic device in such a manner that would not introduce an object within the proximity of and within a field of view of the radar transceiver. In a variation of these embodiments, a time delay can be imposed after acceptance of the call before the radar leakage measurement is performed, to ensure that the device is in midair without any obstruction within the proximity of the radar transceiver when the leakage measurement is captured. In yet another variation, a time window can be imposed for performing the radar leakage measurement in step 2306 to ensure that the leakage measurement is not obtained when the electronic device is proximate to or against a user's face. In another variation of the embodiment described in FIG. 23, other non-radar applications can be substituted in place of the voice/video application, as long as the other non-radar applications require a user to hold the electronic device in a particular position from which it can be inferred that no objects are within a proximity of and within a field of view of a radar transceiver. For example, some gaming applications may require a user to place fingers into a position that does not obstruct the radar antenna module(s). FIG. 24 illustrates a flowchart of a process for an alternative leakage measurement update decision for a voice or video call application according to another illustrative embodiment.
The process can be implemented in a communication-enabled electronic device, such as a phone, a tablet, or a smart watch, with hands-free mode active. Hands-free mode is active when the electronic device is connected to a user by wired or wireless headphones, which allows a user to accept or reject the call indirectly, without regard to the position or location of the electronic device. For example, a user can accept a call with the phone in a pocket or face-down with the radar antenna module blocked. Additional contextual data may therefore be necessary to determine whether the radar leakage measurement should be performed. Examples of contextual data can include data from proximity sensors, light detection sensors, and positioning sensors, which can be used to reduce the likelihood that radar leakage measurements will be performed when one or more objects are within a proximity to and within a field of view of the radar transceiver. The process described in flowchart 2400 starts when a call for a voice or video application is received in step 2402. In step 2404, a determination is made as to whether the call is accepted with hands-free mode active. If the call is accepted with hands-free mode active, a radar leakage measurement is performed in step 2406 if the contextual data permits. Thereafter, the stored leakage measurement is updated in step 2408 and the process terminates. If at step 2404 a determination is made that the call is not accepted with hands-free mode active, then the process does not update the leakage measurement in step 2410 and the process terminates. Confidence levels can also be computed for the embodiments described in FIGS. 23 and 24. For example, positional sensors, light sensors, or proximity sensors providing operational contextual data consistent with an electronic device being present in a pocket or face-down on a surface can be used to compute a confidence level that disfavors the update of a stored leakage measurement. FIG. 25 is a flowchart of a process for opportunistically updating a leakage response according to various embodiments of the present disclosure. A processor, such as the processor 140 of the electronic device 100 in FIG. 1, can execute instructions to cause an electronic device to undergo the steps described in flowchart 2500 to opportunistically update a leakage response. The process begins in step 2502 by making a determination as to whether a change in at least one state variable is detected. The change in the state variable can be used to identify whether a stored leakage measurement associated with the at least one state variable is still valid. Non-limiting examples of state variables can include time, temperature, humidity, or any other device-related state that can affect radar transmissions in an electronic device. In some embodiments, the change in the at least one state variable is determined by identifying any change in the state variable. In other embodiments, the change in the state variable may be a change that exceeds some threshold value. For example, the change in the state variable may be the passage of a discrete amount of time, or a temperature that changes by more than a certain number of degrees or by a certain percentage. If no change has been detected in step 2502, then an update of the stored leakage response is unnecessary and the process returns to the start. If a change in the at least one state variable has been detected, then in step 2504 a determination is made as to whether an object is within a proximity of and within a field of view of a radar transceiver.
If an object is within the proximity of and within a field of view of the radar transceiver, then the radar signals within the leakage taps cannot be accurately attributed to either the leakage signal or the object within the proximity of and within the field of view of the radar transceiver. Accordingly, the process returns to the start. If an object is neither within the proximity of nor within the field of view of the radar transceiver, then a leakage measurement is obtained in step 2506. The leakage measurement can be obtained in any number of ways, as described in earlier embodiments. For example, the leakage measurement can be obtained by extracting a set of signals from a radar measurement captured during the execution of a Type 1 application, or by activating the radar transceiver to perform a set of radar measurements that can be processed to obtain leakage measurements after or during the execution of a Type 2 application. In step 2508, the leakage response is updated based on the leakage measurement. The updating can be a simple replacement or can incorporate averages as described earlier. In addition, the updating can incorporate confidence levels as previously described. After the leakage response is updated, the process terminates. As previously discussed in earlier embodiments, when the process of flowchart 2500 is applied to some Type 1 applications, the step of determining whether the object is within the proximity of and within the field of view of the radar transceiver involves performing a successful radar-based measurement on a target located outside of the proximity of the radar transceiver, and the step of obtaining the leakage measurement includes extracting signals from the successful radar-based measurement corresponding to a set of leakage taps. As previously discussed in earlier embodiments, when the process of flowchart 2500 is applied to some Type 1 applications that have access to operational contextual data that includes Doppler data, the step of determining whether the object is within the proximity of and within the field of view of the radar transceiver includes confirming that reflected energy from within the proximity of the radar transceiver is proportionate with background levels. As previously discussed in earlier embodiments, when the process of flowchart 2500 is applied to some Type 2 applications, the step of determining whether the object is within the proximity of and within the field of view of the radar transceiver includes performing a successful non-radar, sensor-based measurement on a target located outside of the proximity of the radar transceiver, and the step of obtaining the leakage measurement includes measuring a leakage signal between a transmitter and a receiver of the radar transceiver. As previously discussed in earlier embodiments, when the process of flowchart 2500 is applied to some Type 2 applications with access to operational contextual data from one or more proximity sensors, the step of determining whether the object is within the proximity of and within the field of view of the radar transceiver includes determining, with a non-radar proximity sensor, that a target is not detected within the proximity of the radar transceiver, and the step of obtaining the leakage measurement includes measuring a leakage signal between a transmitter and a receiver of the radar transceiver.
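A compact sketch of the overall flow of flowchart 2500, with the state, clearance, and radar steps abstracted as callables; all names are illustrative, not APIs from the disclosure.

```python
def opportunistic_update(read_state, stored_state, state_changed,
                         fov_clear, measure_leakage, apply_update):
    """One pass of the opportunistic leakage-response update (flowchart 2500 sketch)."""
    current = read_state()                        # step 2502: gather state variables
    if not state_changed(stored_state, current):  # no change: stored response still valid
        return False
    if not fov_clear():                           # step 2504: object in proximity/FoV?
        return False                              # cannot update now; retry later
    leakage = measure_leakage()                   # step 2506: obtain leakage measurement
    apply_update(leakage, current)                # step 2508: update response and state
    return True
```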
As previously discussed in earlier embodiments, when the process of flowchart 2500 is applied to some Type 2 applications without access to operational contextual data from sensors, the step of determining whether the object is within the proximity of and within the field of view of the radar transceiver includes receiving a user input by the electronic device, the user input being correlated with an absence of any objects within the proximity of the radar transceiver. Examples of user input were described in more detail in FIGS. 23 and 24 and can include accepting or rejecting a voice or video call when the electronic device is not operating in hands-free mode. Another example of user input can be the movement of the phone in three-dimensional space, such as when a user brings the phone towards the user's ear. In addition, the step of obtaining the leakage measurement further comprises measuring a leakage signal between a transmitter and a receiver of the radar transceiver. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claim scope. Moreover, none of the claims is intended to invoke 35 U.S.C. § 112(f) unless the exact words “means for” are followed by a participle.
11860299
DETAILED DESCRIPTION According to an exemplary embodiment, the errors of the azimuth angle and/or of the elevation angle are reduced iteratively. In a first step of the iterative method, first erroneous uncompensated azimuth angles Φ_0^A and elevation angles Φ_0^E are determined from the measurements Δφ1, Δφ2, the values dTx, dR, dTy determined by the antenna array, and the wave number k of the electromagnetic wave, with the aid of the above-mentioned Equations (1) and (2). With the aid of a priori knowledge, a first compensation value K(Φ_0^A, Φ_0^E) is then determined. The determination of the first compensation value is accomplished by means of a calculation or by readout from a memory in which a lookup table can be stored. Using the compensation value K(Φ_0^A, Φ_0^E), a first compensated elevation angle Φ_1^E is then calculated with the following equation:

Φ_i^E = Φ_0^E + K(Φ_{i−1}^A, Φ_{i−1}^E)   (4)

The associated compensated azimuth angle Φ_i^A is then determined from the equation derived from Equation (2):

Δφ_A^i = Δφ1/cos(Φ_i^E)  →  Φ_i^A = sin⁻¹(Δφ_A^i/(k·dR))   (5)

The compensated elevation angle calculated with the first compensation value and the first compensated azimuth angle calculated therewith are still erroneous. They can be used as input quantities for a second compensation step, in which a second compensation value is determined through a second calculation or a second readout from a memory. Using the second compensation value, the second compensated elevation angle can then be calculated using Equation (4). The second compensated azimuth angle then results from use of Equation (5). Further compensation steps can follow. The method can be continued iteratively until the error is minimized such that further processing of the compensated azimuth angles and elevation angles is reasonably possible. It has been demonstrated in an investigation that even two iterations are sufficient to largely compensate the systematic estimation error resulting from the coupling of the transmitting antennas. This is shown in FIGS. 7a to 7c. According to another exemplary embodiment, an erroneous elevation angle and an erroneous azimuth angle are first calculated from the measured phase differences by means of Equations (1) and (2). These erroneous quantities are used as input quantities for reading a compensated elevation angle out of a memory in which a lookup table can be stored. A compensated azimuth angle is then determined by means of the compensated elevation angle and Equation (2). The lookup table from which the compensated elevation angle can be read is based on measurements. For this purpose, a space around the antenna array can be sampled as finely as possible with the aid of a strong reflector, for example, which means that the reflector is displaced between two samples by an angular amount in height (elevation) or in the plane (azimuth). From the phase differences Δφ1, Δφ2 measured in this process, erroneous elevation angles Φ̂^E and azimuth angles Φ̂^A based on the measurements are calculated by means of Equations (1) and (2). The actual elevation angle of the measurement arrangement can be uniquely assigned to these erroneous angles. This assignment of the erroneous elevation angle Φ̂^E and the erroneous azimuth angle Φ̂^A to the actual elevation angle is then stored in the lookup table.
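A sketch of the iterative compensation of Equations (4) and (5), assuming the compensation value K is available as a function or lookup over the previous angle estimates; the clipping guard is an added numerical safeguard, not part of the disclosure.

```python
import numpy as np

def compensate(phi_a0, phi_e0, dphi1, k, d_r, K, iterations=2):
    """Iteratively compensate azimuth/elevation per Equations (4) and (5).

    phi_a0, phi_e0: uncompensated angles from Equations (1) and (2) [rad]
    dphi1: measured phase difference; k: wave number; d_r: antenna spacing
    K: compensation value as a callable K(phi_a, phi_e) (a priori knowledge)
    """
    phi_a, phi_e = phi_a0, phi_e0
    for _ in range(iterations):              # two iterations largely suffice
        phi_e = phi_e0 + K(phi_a, phi_e)     # Equation (4)
        dphi_a = dphi1 / np.cos(phi_e)       # Equation (5), elevation decoupling
        phi_a = np.arcsin(np.clip(dphi_a / (k * d_r), -1.0, 1.0))
    return phi_a, phi_e
```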
The dependence of the actual elevation angle on the erroneous elevation angle Φ̂^E and the erroneous azimuth angle Φ̂^A is shown graphically in FIG. 4. If erroneous elevation angles Φ̂^E and azimuth angles Φ̂^A are produced later based on the measurement of the phase differences Δφ1, Δφ2, the actual elevation angle can be read out of the lookup table stored in a memory. The actual azimuth angle can then be determined by means of Equation (2). The measurements for determining the lookup table can be carried out for each antenna array, for example during a so-called EOL calibration. The measurement is then independent of production and component tolerances. However, great calibration effort is then required. Alternatively, measurements could also be performed on a sample of antenna arrays, the results of which are then applied to all antenna arrays. A lookup table is shown graphically in FIG. 4. The single-valued region in the elevation direction has been chosen as +/−30°, which is to say 60°. It is evident in FIG. 4 that a symmetry is present in the opposite quadrants. This can be used to reduce the memory requirement. Since the table thus generated contains only discrete values, the determination of correction values at intermediate points takes place through interpolation, for example through linear interpolation. Despite the utilization of symmetry, a large memory is nonetheless necessary in order to store the lookup table with adequate accuracy. For example, if one assumes an angular resolution (step size) of 1° in the azimuth and elevation angle directions, and a 16-bit quantization of the values, i.e., 2 bytes per value, then with an angular range of +/−30° for elevation and +/−90° for the azimuth angle, the result is a memory requirement of 180*60/2*2 bytes = 10.8 Kbytes. If one reduces the resolution from 1° to 5°, the memory requirement can be reduced to 432 bytes at the expense of accuracy. The average correction factor for various step sizes is shown in FIG. 5. In another exemplary embodiment, the dependence of the actual elevation angle on the erroneous elevation angle Φ̂^E and the erroneous azimuth angle Φ̂^A, as is graphically represented in FIG. 4, is approximated by a polynomial. The coefficients that are obtained through this approximation can be stored in a memory. This will generally take place once during setup of the antenna array. The coefficients and the polynomial constitute the a priori knowledge that is used in the third method for compensation of the error. It has been shown that the accuracy of the determination of the coefficients, in particular the quantization and the order of the polynomial selected for the approximation, has a great effect on the quality of the approximation (see FIG. 6). In order to achieve a sufficiently accurate approximation, the inventor proposes a 5th-order polynomial in the two variables of the polynomial (i.e., in both angular directions), and a 32-bit quantization of the coefficients. This means that 21 coefficients of 32 bits each are stored. This results in a memory requirement of 84 bytes (21*4 bytes). The memory requirement is thus reduced by approximately 80% as compared to the second method with 5° angular resolution.
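A sketch of the polynomial variant: a 5th-order polynomial in two variables has (5+1)(5+2)/2 = 21 coefficients, matching the 84-byte figure above; the coefficient values here are placeholders.

```python
import numpy as np

def eval_poly5(coeffs, phi_a, phi_e):
    """Evaluate sum over i+j<=5 of c_ij * phi_a**i * phi_e**j (21 coefficients)."""
    idx, result = 0, 0.0
    for i in range(6):
        for j in range(6 - i):          # enforce total order i + j <= 5
            result += coeffs[idx] * (phi_a ** i) * (phi_e ** j)
            idx += 1
    return result

coeffs = np.zeros(21, dtype=np.float32)  # 21 coefficients x 4 bytes = 84 bytes stored
```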
However, this comes at the expense of an increased computing load for every raw target, since in this case a 5th-order polynomial must be evaluated for every raw target, resulting in 70 multiplications and 20 additions. The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.
11860300
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. DETAILED DESCRIPTION The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. In addition, unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising” will be understood to imply the inclusion of stated elements but not the exclusion of any other elements. In addition, the terms “-er”, “-or”, and “module” described in the specification mean units for processing at least one function and operation, and can be implemented by hardware components or software components and combinations thereof. FIG. 2 schematically shows a configuration of a vehicle radar inspection system according to an exemplary form of the present disclosure. Referring to FIG. 2, the vehicle radar inspection system according to an exemplary form of the present disclosure includes a wireless terminal 10 mounted in a vehicle, a radar sensor 20, a centering portion 30 provided in a vehicle inspection line, a displacement sensor 40, an array antenna 50, a robot 60, and a server 70. The wireless terminal 10 is mounted in a vehicle that moves along an inspection line, and matches a unique identification (ID) with the vehicle identification information of each vehicle. The radar sensor 20 is installed inside a front bumper of the vehicle, and is connected with a vehicle electronic control unit (ECU, not shown) through a communication line. The wireless terminal 10 is connected to a communication line in the vehicle through a connector and thus communicates directly with the radar sensor 20, or indirectly with the radar sensor 20 through the vehicle ECU. The wireless terminal 10 may be provided as an on-board diagnostics (OBD) system, and receives a control signal (On/Off) for test radar signal transmission from the server 70 through antennas and transmits the received control signal to the radar sensor 20. In addition, when a mounting error of the radar sensor 20 occurs, the wireless terminal 10 receives a sensor correction value from the server 70 and transmits the received value to the radar sensor 20, and transmits a corrected sensor angle value of the radar sensor 20 to the server 70. The radar sensor 20 includes a transmitting unit that transmits a radar signal forward, a receiving unit that receives a reflected radar signal, and a control module (MCU) that measures a distance to a frontal object, a speed, and an angle by analyzing the reflected radar signal. The radar sensor 20 may set an angle value of the sensor at which the radar signal is transmitted, and adjusts the sensor angle on its own according to the sensor correction value received from the server 70. For example, the sensor angle adjustment can be carried out by software that sets an offset according to the sensor correction value. However, the exemplary form of the present disclosure is not limited thereto, and the sensor angle can be mechanically adjusted by using a device that minutely adjusts an angle of each of the transmitting unit and the receiving unit. The centering portion 30 aligns a location of the vehicle according to a referenced inspection position of the radar sensor 20 by using a driving roller 31.
The centering portion 30 determines an alignment state of the vehicle through a vision sensor 32 provided at an upper side of the vehicle when the tires of the vehicle are located on the driving roller 31, and aligns the vehicle to the referenced inspection position by moving the driving roller 31 forward or backward when the vehicle is tilted left or right. For example, FIG. 3 shows a vehicle centering method according to the exemplary form of the present disclosure. Referring to FIG. 3, the centering portion 30 extracts a virtual center line from a vehicle image area photographed through the vision sensor 32, and calculates a tilted angle of the vehicle by comparing the extracted center line with the referenced inspection position. In addition, at least one of the driving rollers 31a to 31d that correspond to the four wheels of the vehicle is driven in a forward or backward direction to match the center line to a reference line, thereby correcting the tilted angle of the vehicle. In typical vehicle centering, two rollers on which the tires are located may be disposed in units of the front wheels and the rear wheels, but four rollers may be disposed in the centering portion 30 to adjust the tilted angle. Meanwhile, as previously described, the vehicle must be assumed to be horizontally aligned to the correct position in order to reliably determine the mounting state of the radar sensor 20. However, vehicles assembled at a factory may have slight errors in the assembly of various parts. For example, horizontal alignment at the same height may be difficult because bending or lifting of the vehicle occurs due to various reasons such as the size of the wheels, optional parts such as tires (including air pressure), the weight of the vehicle body, leaning, and the like. Thus, as shown in FIG. 2, the displacement sensor 40 according to the exemplary form of the present disclosure is provided at each of the front side and the rear side of the centering portion 30 to measure the height of the bottom of the aligned vehicle body and transmit the measured height to the server 70. FIG. 4 shows a method for measuring a height and an angle of the vehicle by using the displacement sensor according to the exemplary form of the present disclosure. Referring to FIG. 4, the displacement sensors 40 according to the exemplary form of the present disclosure measure the height of the bottom of the vehicle according to the receiving time of a signal reflected after transmitting one of ultrasonic waves, laser, and infrared rays. In this case, the bottom height of the vehicle may be measured at a plurality of locations from a front side of the vehicle to a rear side of the vehicle. Based on this, the server 70 virtually connects the plurality of height values of the vehicle measured by the displacement sensors 40 and generates a virtual vehicle body line. In addition, based on the floor, which is a horizontal plane, a bending angle or a lifted angle of the vehicle (hereinafter referred to as a vehicle correction angle) due to deviation of the virtual vehicle body line can be detected. The array antenna 50 measures the propagation intensity of the radar signal transmitted from the radar sensor 20 through a plurality of antennas that are disposed at a front end of the robot 60, and recognizes the spot where the radar signal has the strongest propagation intensity as a radar power center spot.
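A sketch of deriving the vehicle correction angle from the displacement-sensor heights of FIG. 4, assuming heights sampled at known positions along the vehicle; fitting a least-squares line through the points as the virtual vehicle body line is an illustrative choice, not stated in the disclosure.

```python
import numpy as np

def vehicle_correction_angle(positions_m, heights_m):
    """Fit a line through the measured bottom heights and return its tilt in degrees."""
    slope, _ = np.polyfit(positions_m, heights_m, 1)  # virtual vehicle body line
    return np.degrees(np.arctan(slope))               # tilt vs. the horizontal floor

# e.g. heights rising by 6 mm from front to rear over 3 m -> roughly 0.11 degrees
print(vehicle_correction_angle([0.0, 1.0, 2.0, 3.0], [0.100, 0.102, 0.104, 0.106]))
```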
The array antenna 50 measures the propagation intensity of the radar signal transmitted from the radar sensor 20 through a plurality of antennas that are disposed at a front end of the robot 60, and recognizes the spot where the radar signal has the strongest propagation intensity as a radar power center spot. FIG. 5 schematically shows a configuration of the array antenna 50 according to the exemplary form of the present disclosure. Referring to FIG. 5, the array antenna 50 includes a vertical panel 51, horn antennas 52 disposed in plural at a front side of the vertical panel 51, an image sensor 53 disposed at the front center of the vertical panel 51, and a mounting portion 54 that is provided at a rear side of the vertical panel 51 and combined with the front end of the robot 60. Each of the horn antennas 52 has an opening in the shape of a trunk tube, and the horn antennas 52 may be arranged in a lattice format, which includes a plurality of columns and rows. In FIG. 5, for convenience of description, two horn antennas 52 are mounted per column and row in the vertical panel 51, but the number of horn antennas 52 is not limited thereto. The horn antennas 52 that are disposed in the plurality of columns may be used to detect a mounting error interval and a mounting error angle in the vertical direction of the radar sensor 20. In addition, the horn antennas 52 that are disposed in the plurality of rows may be used to detect a mounting error interval and a mounting error angle in the horizontal direction of the radar sensor 20. FIG. 6 shows a radar center measuring method using the array antenna 50 according to the exemplary form of the present disclosure. Referring to FIG. 6, the array antenna 50, in which two or more horn antennas 52 are vertically disposed, is located at an inspection position P, which is away by a predetermined distance a from the radar sensor 20, and the radar signal is transmitted to detect an actually-measured radar power center value C. The array antenna 50 measures, for each horn antenna 52, the power of the electromagnetic waves transmitted from the radar sensor 20, and determines the radar power center value C at the location where the strongest power is measured. In this case, the measured radar power center value C is compared with a reference center value of the radar center specification to detect a mounting error angle value θ of the radar sensor 20. Here, the mounting error angle value θ expresses an error, i.e., a deviation with respect to the center specification, and at the same time serves as a correction value for matching the radar power center value C with the reference center value. Unlike a conventional radar measurement inspection, in which a value reflected by a radar correction target at a distance in front of the vehicle is evaluated to correct the angle of a radar sensor, the radar inspection method according to the exemplary form of the present disclosure derives a correction angle by using the value received at the array antenna 50. Such an array antenna 50 can carry out the inspection at an inspection position P that is within 1 m of the radar sensor 20. Thus, it has the advantage that the inspection space can be reduced compared to the conventional radar measurement inspection method. In addition, the electromagnetic wave absorber that is provided with the radar correction target can be omitted, and thus installation cost can be saved; moreover, even when the array antenna 50 is moved for radar signal transmission, the radar power center can be measured in real time. A minimal sketch of this power-center search follows below.
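As a non-authoritative illustration of the search just described (the grid layout, antenna pitch, and variable names are assumptions), the power-center spot can be found as the strongest cell of the horn-antenna lattice and its offset from the reference center converted into an error angle at the inspection distance a:

```python
import math

# Minimal sketch: locate the radar power center on the horn antenna lattice and
# convert its offset from the reference center into a mounting error angle.

def power_center(powers):
    """Return (row, col) of the horn antenna measuring the strongest power."""
    return max(
        ((r, c) for r in range(len(powers)) for c in range(len(powers[0]))),
        key=lambda rc: powers[rc[0]][rc[1]],
    )

# Powers measured per horn antenna (rows = vertical, cols = horizontal).
powers = [
    [0.2, 0.4, 0.3],
    [0.5, 0.7, 0.6],
    [0.3, 0.9, 0.4],   # strongest power one row below the reference center
]
antenna_pitch_m = 0.05     # spacing between horn antennas (assumed)
a_m = 0.8                  # inspection distance, within 1 m of the sensor

row, col = power_center(powers)
ref_row, ref_col = 1, 1    # reference center from the radar center specification
offset_m = (row - ref_row) * antenna_pitch_m
theta = math.degrees(math.atan2(offset_m, a_m))
print(f"vertical mounting error angle: {theta:.2f} deg")
```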
Referring back to FIG. 2, the robot 60 is provided as a multi-joint manipulator that is capable of kinematic posture control, and the array antenna 50 is mounted at the front end thereof. The robot 60 can move the array antenna 50 to a primary inspection position P1 that is disposed at a first distance a from the radar sensor 20 and a secondary inspection position P2 that is disposed at a second distance a′ from the radar sensor 20, according to an applied posture control signal. In this case, the robot 60 recognizes the center of the radar cover formed in the grille in the front portion of the vehicle through the image sensor 53 that is disposed at the center of the front side of the vertical panel 51, and horizontally aligns the center of the cover with the center of the array antenna 50. That is, when the mounting state of the radar sensor 20 is inspected, the array antenna 50 can be moved to the primary inspection position P1 and the secondary inspection position P2 while being horizontally aligned with the center of the radar cover by the robot 60. The server 70 is provided as computer equipment that controls the entire operation of each element in the system for vehicle radar inspection according to the exemplary form of the present disclosure. FIG. 7 is a schematic block diagram of the server according to the exemplary form of the present disclosure. Referring to FIG. 7, the server 70 according to the exemplary form of the present disclosure includes a communication unit 71, an interface unit 72, a robot controller 73, a database 74, and a controller 75. The communication unit 71 is connected with the wireless terminal 10 of the vehicle through antennas, and transmits a control signal (On/Off) for radar signal transmission of the radar sensor 20. In addition, the communication unit 71 generates a sensor correction value when a mounting error of the radar sensor 20 occurs, transmits the generated sensor correction value to the radar sensor 20, and receives a response confirming completion of the sensor correction. The interface unit 72 connects the server 70 and the peripheral devices provided in the vehicle radar inspection process for interworking therebetween. The interface unit 72 connects communication with the centering portion 30 so that the server 70 can determine the tilted angle of the vehicle through the vision sensor 32, and supports control of the vehicle centering by operation of the driving rollers 31. In addition, the interface unit 72 connects communication with the displacement sensors 40 to receive a correction angle of the vehicle body according to the bending or lifting state of the centered vehicle. Such a vehicle body correction angle can be used in the correction of the radar mounting error, which will be calculated later. The robot controller 73 stores kinematic information for posture control of the robot 60, and locates the array antenna 50 at the primary inspection position P1 and the secondary inspection position P2 through posture control of the robot 60. The robot controller 73 recognizes the radar cover center of the centered vehicle through the image sensor 53, and aligns the center of the array antenna 50 with reference to the center of the cover through the posture control of the robot 60. The robot controller 73 controls the posture of the robot 60 during a primary radar measurement to locate the array antenna 50 at the primary inspection position P1, and moves the array antenna 50 horizontally during a secondary radar measurement to locate the array antenna 50 at the secondary inspection position P2. The database 74 stores various data and programs for the inspection of the radar sensor 20, and stores the data generated from the inspection of the radar sensor 20 of each vehicle. For example, the database 74 stores the radar sensor mounting position in the design drawings for different vehicles, and stores centering information for different vehicles, reference mounting specification information for different vehicles, primary and secondary inspection position setting information, and the like.
In addition, the database 74 matches the ID and vehicle identification information of the wireless terminal 10 and stores the result, and stores the result of the radar sensor inspection of the vehicle in which the wireless terminal 10 is loaded. The controller 75 is a central processing unit that controls the entire operation of each element for the vehicle radar sensor inspection according to the exemplary form of the present disclosure. That is, the configuration of each part may be hardware, software, or a combination of hardware and software, and each function and role thereof may be operated or interworked under the control of the controller 75. FIG. 8 shows a method for calculating a mounting position and an angle of the radar sensor according to the exemplary form of the present disclosure. Referring to FIG. 8, the radar measurement values of a reference radar sensor 20a normally mounted at the reference mounting position of the vehicle and of an actual radar sensor 20b at its actual mounting position are compared. The controller 75 sets a reference mounting specification for error detection by modelling measurement information of the reference radar sensor 20a that is normally mounted to the vehicle, and compares the measurement information of the reference radar sensor 20a with the measurement information of the actual radar sensor 20b, actually measured in the inspection line, to detect a mounting position error value x and a mounting error angle value θ. When the inspection is started, the controller 75 sequentially locates the array antenna 50 for the actual radar sensor 20b at the primary inspection position P1 and the secondary inspection position P2, and measures a primary radar center value C1 and a secondary radar center value C2. Here, a denotes the distance (hereinafter referred to as a first distance) between the radar sensors 20a and 20b and the primary inspection position P1, and a′ denotes the second distance between the radar sensors 20a and 20b and the secondary inspection position P2. Further, b denotes the distance deviation (hereinafter referred to as a primary distance deviation) with respect to the mounting specification corresponding to the primary radar center value C1 measured at the primary inspection position P1, and b′ denotes the distance deviation (hereinafter referred to as a secondary distance deviation) with respect to the mounting specification corresponding to the secondary radar center value C2 measured at the secondary inspection position P2. Finally, x denotes the mounting error height value of the actual radar sensor 20b with respect to the normal mounting position of the reference radar sensor 20a, and θ denotes the tilted mounting error angle value of the actual radar sensor 20b. The controller 75 measures the primary radar center value C1 and the secondary radar center value C2 of the actual radar sensor 20b, and calculates the mounting position of the actual radar sensor 20b by using a trigonometric function that refers to the first distance a and the second distance a′. In addition, the controller 75 compares the radar center value of the actual radar sensor 20b with a horizontal center line from the mounting position of the actual radar sensor 20b, and detects the mounting error angle value θ of the actual radar sensor 20b. In this case, the controller 75 can calculate the mounting error angle value θ and the mounting error height value x both in a state in which the actual radar sensor 20b is bent downward (b<b′) and in a state in which it is bent upward (b>b′).
For example, as shown in FIG. 8, the controller 75 can calculate the mounting error height value x through Equation 1 when the actual radar sensor 20b is bent downward (b<b′):

θ = arctan(|b′ − b| / |a′ − a|), x = b′ − a′ · tan θ   (Equation 1)

In addition, the controller 75 can calculate the mounting error height value x through Equation 2 when the actual radar sensor 20b is bent upward (b>b′):

θ = arctan(|b′ − b| / |a′ − a|), x = b′ + a′ · tan θ   (Equation 2)

In the above description referring to FIG. 8, a method for calculating errors by measuring the vertical mounting position and angle of the radar sensor 20 has been described, but errors in the horizontal mounting position and angle of the radar sensor 20 can be measured by using the same method. A minimal numerical sketch of Equations 1 and 2 follows below.
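To make the two-position geometry concrete, here is a minimal sketch of Equations 1 and 2 (the function name and the example numbers are illustrative assumptions):

```python
import math

# Minimal sketch: compute the mounting error angle value θ and the mounting
# error height value x from the primary/secondary distance deviations b, b′
# measured at the first/second distances a, a′ (Equations 1 and 2).

def mounting_error(a, a_prime, b, b_prime):
    theta = math.atan(abs(b_prime - b) / abs(a_prime - a))
    if b < b_prime:                      # sensor bent downward (Equation 1)
        x = b_prime - a_prime * math.tan(theta)
    else:                                # sensor bent upward (Equation 2)
        x = b_prime + a_prime * math.tan(theta)
    return math.degrees(theta), x

# Example: deviations grow from 20 mm at a = 0.5 m to 25 mm at a' = 0.8 m.
theta_deg, x_m = mounting_error(a=0.5, a_prime=0.8, b=0.020, b_prime=0.025)
print(f"mounting error angle: {theta_deg:.2f} deg, height error: {x_m*1000:.1f} mm")
```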
Meanwhile, the controller 75 can derive a final mounting error by reflecting the vehicle body correction angle detected by the displacement sensors 40 in at least one of the mounting error height value x and the mounting error angle value θ. In addition, the controller 75 determines the inspection to be successful when the final mounting error satisfies a predetermined mounting specification; if it is not satisfied, a re-mounting process is carried out through angle adjustment of the radar sensor 20, or a repair process is carried out. Meanwhile, referring to FIG. 9, a vehicle inspection method according to the exemplary form of the present disclosure will be described based on the above-described configuration of the vehicle radar inspection system. However, the above-described constituent elements of the server 70 can be integrated or further subdivided, and thus, in the description of each stage of the vehicle radar inspection method according to the exemplary form of the present disclosure, the server 70 rather than the corresponding constituent elements will mainly be referred to. FIG. 9 is a schematic flowchart of the vehicle radar inspection method according to the exemplary form of the present disclosure. Referring to FIG. 9, the server 70 according to the exemplary form of the present disclosure connects communication with the wireless terminal 10 of a vehicle that enters the inspection line, and aligns the vehicle at the referenced inspection position of the radar sensor through the centering portion 30 (S1). In this case, the server 70 determines the alignment state of the vehicle through the vision sensor 32 disposed above the vehicle while the tires of the vehicle are mounted on the driving rollers 31. In addition, when the vehicle is misaligned left or right, the driving rollers 31 operate forward or backward to align the vehicle with the referenced inspection position. The server 70 measures the height of the lower portion of the vehicle at a plurality of spots through the displacement sensors 40 to generate a virtual vehicle body line, and detects the vehicle body correction angle with reference to a horizontal plane (S2). The server 70 locates the array antenna 50 at the primary inspection position P1 at the first distance a from the radar sensor 20 of the vehicle by posture control of the robot 60, and transmits a radar signal to measure the primary radar center value C1 (S3). In this case, the server 70 transmits a control signal for radar signal transmission through the wireless terminal 10 of the vehicle to operate the radar sensor 20. The server 70 then locates the array antenna 50 at the secondary inspection position P2 at the second distance a′ from the radar sensor 20 of the vehicle, and transmits a radar signal to measure the secondary radar center value C2 (S4). The server 70 calculates the mounting position of the radar sensor 20 by using a trigonometric function that refers to at least one of the primary radar center value C1, the secondary radar center value C2, the first distance a, and the second distance a′ (S5). The server 70 calculates the mounting position error value x and the mounting error angle value θ by comparing the calculated mounting position of the radar sensor with the mounting specification (S6). The server 70 derives the final mounting error by reflecting a correction value according to the vehicle body correction angle detected by the displacement sensors 40 in at least one of the mounting position error height value x and the mounting error angle value θ (S7). The server 70 determines that the radar sensor 20 is normally mounted when the final mounting error satisfies the predetermined mounting specification (S8; Yes), and terminates the inspection process. On the other hand, when the final mounting error does not satisfy the predetermined mounting specification in S8 (S8; No), the server 70 determines whether the final mounting error is within a range that can be corrected by the radar sensor 20 (S9). In this case, when the final mounting error is within the range that can be corrected by the radar sensor 20 (S9; Yes), the server 70 generates a radar sensor correction value for correcting the final mounting error and transmits the generated radar sensor correction value to the radar sensor 20 through the wireless terminal 10 (S10). On the other hand, when the final mounting error is not within the range that can be corrected by the radar sensor 20 in S9 (S9; No), a repair process starts; the bumper is separated and the radar sensor 20 is then re-mounted (S12). A minimal sketch of this decision flow follows below. As described above, according to the exemplary form of the present disclosure, the radar signal center value is measured through the array antenna that receives the radar signal at regular intervals, assembly tolerances can be detected by calculating the errors of the mounting position and angle of the radar sensor, and a recognition error of the radar sensor can be corrected. Accordingly, the inspection system can reduce the cost of warranty claims. In addition, it is effective to measure the radar center value at the array antenna rather than to measure the signal reflected by a conventional radar correction target. It is likewise effective to shorten the transmission/reception distance of the radar signal and to simply inspect the mounting position of the radar sensor in a narrow space. Further, the server in the inspection line automatically controls the radar sensor and the peripheral devices of the vehicle, which has the advantage of reducing the workload of the final inspection line.
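As a minimal sketch of the S8/S9/S10/S12 branching described above (the function name and the threshold values are illustrative assumptions, not values stated in the patent):

```python
# Minimal sketch: the server's decision flow after the final mounting error
# has been derived (steps S8 through S12).

def resolve_inspection(final_error_deg, spec_deg=0.5, correctable_deg=2.0):
    """Return the action the server takes for a given final mounting error."""
    if abs(final_error_deg) <= spec_deg:
        return "normally mounted - terminate inspection"                 # S8: Yes
    if abs(final_error_deg) <= correctable_deg:
        # S9: Yes - send a sensor correction value via the wireless terminal.
        return f"transmit correction value {-final_error_deg:+.2f} deg"  # S10
    return "separate bumper and re-mount the radar sensor"               # S12

for err in (0.3, 1.4, 3.1):
    print(f"final error {err:+.1f} deg -> {resolve_inspection(err)}")
```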
While this present disclosure has been described in connection with what is presently considered to be practical exemplary forms, it is to be understood that the present disclosure is not limited to the disclosed forms, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the present disclosure.

DESCRIPTION OF SYMBOLS

10: wireless terminal
20: radar sensor
30: centering portion
40: displacement sensor
50: array antenna
51: vertical panel
52: horn antenna
53: image sensor
54: mounting portion
60: robot
70: server
71: communication unit
72: interface unit
73: robot controller
74: database
75: controller
24,523
11860301
DETAILED DESCRIPTION Exemplary embodiments of the present invention provide a device and a method for allowing a distance sensor test bench to be tested with little technical effort and cost. In an exemplary embodiment, a testing device for testing a distance sensor that operates using electromagnetic waves is provided. In a test mode, a test signal unit generates a test signal, and the test signal or a test signal derived from the test signal is radiated as an output signal via the radiating element. Synchronously with the radiation of the test signal or of the derived test signal as an output signal, an analysis unit analyzes the receive signal or the derived receive signal in terms of its phase angle and/or amplitude and stores the determined value of phase angle and/or amplitude. The implementation of the test signal unit makes it possible to generate a test signal and radiate it via the radiating element into the external space as needed. This is not possible with the testing devices that form the basis of the invention, because they only radiate previously received signals with a suitable time delay. With the measures described, the testing device can be left in its installed position for the testing of a distance sensor test bench; i.e., the testing device does not need to be replaced with a special transmitter. In exemplary embodiments, the transmitted test signal or the test signal derived from the test signal is reflected in the distance sensor test bench assembly and received by the testing device. Thus, in the test mode, the receive signal then received by the testing device corresponds to the transmitted and reflected signal. This receive signal or the receive signal derived therefrom can then be analyzed in terms of its phase angle and/or amplitude by the analysis unit. The analysis is usually performed with respect to a reference signal, which may be, for example, the transmitted test signal. In distance sensor test benches having a folded optical path, the testing device and the distance sensor to be tested (or the antenna to be measured) are often spaced apart so that an overall V-shaped optical path is formed. In this case, a preferred embodiment of the testing device is advantageously characterized in that the delay unit, the test signal unit, and the analysis unit are enclosed by a housing, and the receiving element is connected to the housing via a signal line, and thus can be positioned remotely from the housing of the testing device. When it is said that the receiving element is connected to the housing via a signal line, this means, of course, a connection that enables transfer of the receive signal or of a signal derived from the receive signal into the interior of the housing for purposes of further electronic signal processing. With this embodiment, it is possible to place the receiving element in the installed position of the distance sensor to be tested, so that the planeness of the electromagnetic waves can be tested at the testing location of the distance sensor, and solely by the testing device.
The testing device itself does not need to perform any analysis of the planeness of the electromagnetic waves; rather, the receive signal or the derived receive signal is analyzed in terms of its phase angle and/or its amplitude, and the corresponding measurement values are held available in the testing device, be it in order to actually perform a further analysis in the testing device itself, or be it to transfer the measurement values via a suitable interface to, for example, an external computer for analyzing the planeness of the received waves therein. Another preferred embodiment of the testing device provides that the receiving element have a mixer, and that the receive signal be down-converted to a lower intermediate frequency by the mixer. The low-frequency receive signal derived from the receive signal in this manner is transferred via the signal line at least to the analysis unit enclosed by the housing. This example illustrates why a differentiation is made between the terms “receive signal” and “receive signal derived from the receive signal.” The receive signal per se has its origin in the free-space wave picked up by the receiving element. If further signal processing is performed before the receive signal is passed on to one of the further signal processing units (i.e., delay unit or analysis unit), then strictly speaking the signal in question is no longer the receive signal itself, but a receive signal derived therefrom. This is the case in the aforementioned exemplary embodiment, where the receive signal is down-converted to a lower intermediate frequency. The advantage of the down-conversion to a lower intermediate frequency is that the transfer of these lower-frequency signals places fewer demands on the transmission path from the receiving element via the signal line to the further electronic units in the housing of the testing device. This explanation also clarifies the differentiation between the test signal generated by the test signal unit and a test signal possibly derived therefrom, which is then radiated as an output signal. Accordingly, it could be provided for the test signal to be generated with a relatively low frequency (corresponding to the down-converted intermediate frequency) and to be up-converted by a mixer prior to being radiated via the radiating element. As mentioned earlier, the analysis unit determines the phase angle of the receive signal or the phase angle of the derived receive signal preferably with respect to a reference signal, which may in particular be the test signal or the derived test signal. An advantageous embodiment of the testing device provides that the analysis unit analyze the receive signal or the receive signal derived from the receive signal in terms of its phase angle through propagation time measurement with respect to the test signal radiated as an output signal or with respect to the derived test signal radiated as an output signal. Various methods are known for obtaining propagation time information through smart analysis of the transmitted signal and of the reflected receive signal that originates from the transmitted signal, for example, by using a frequency-modulated signal (chirp signal) as the test signal and mixing the transmit signal and the receive signal, so that propagation time information can be readily obtained from the then determined frequency difference; a minimal sketch of this relationship follows below.
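For the chirp case just mentioned, the beat frequency obtained by mixing is proportional to the propagation time. A minimal sketch under stated assumptions (the sweep parameters and names are illustrative, not values from the patent):

```python
# Minimal sketch: with a frequency-modulated continuous wave (chirp) test
# signal, mixing the transmit and receive signals yields a beat frequency
# f_beat = (B / T) * tau, from which the propagation time tau follows.

BANDWIDTH_HZ = 1.0e9        # chirp sweep bandwidth B (assumed)
SWEEP_TIME_S = 1.0e-3       # chirp duration T (assumed)
C = 299_792_458.0           # speed of light (m/s)

def propagation_time(beat_hz):
    """Propagation time from the measured beat frequency."""
    return beat_hz / (BANDWIDTH_HZ / SWEEP_TIME_S)

beat = 20_000.0                          # measured beat frequency (Hz)
tau = propagation_time(beat)
print(f"propagation time: {tau*1e9:.1f} ns, traveled path: {C*tau:.2f} m")
```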
In particular, it is provided that the test signal or the derived test signal be a pulse, a pulse train, a continuous wave signal, or a frequency-modulated continuous wave signal. An advantageous refinement of the testing device is characterized in that the analysis unit analyzes a plurality of receive signals or a plurality of derived receive signals in terms of phase angle and/or in terms of amplitude and stores a plurality of values of phase angle and/or amplitude. In the real measurement process, the plurality of values of phase angle and/or amplitude result from the fact that the position of the receiving element in the test mode corresponds to the installed position of the distance sensor to be tested, and that this position is varied. Thus, the quiet zone of the distance sensor that is normally to be tested can be scanned and measured linearly or two-dimensionally by positional variation of one or two position variables. The deviations of the measured phase angles and/or amplitudes from measurement point to measurement point are a measure of the planeness of the incoming waves in the quiet zone. The deviations allow a conclusion to be drawn as to whether the distance sensor test bench to be tested still meets the accuracy requirements or needs to be recalibrated; a minimal sketch of such a planeness check follows below. The analysis of the measured phase angles and/or amplitudes with respect to wave planeness can be, but does not have to be, performed in the testing device. In an advantageous embodiment, the testing device has a communication interface via which an external computer is connectable to the testing device. The testing device, particularly the analysis unit of the testing device, then transfers at least one value of phase angle and/or amplitude of the receive signal or of the derived receive signal via this communication interface to the external computer. There, the phase angles and/or amplitudes of the receive signal can be analyzed, and conclusions can be drawn about the planeness of the waves and about the alignment of the various components of the distance sensor test bench. An advantageous refinement of the aforementioned embodiment provides that the testing device generate the test signal in response to an external request received via the communication interface, radiate the test signal or a test signal derived from the test signal as an output signal via the radiating element, and analyze the receive signal or the derived receive signal in terms of its phase angle and/or in terms of its amplitude synchronously with the radiation of the output signal.
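The patent leaves the planeness criterion open; assuming a simple peak-to-peak phase criterion, a minimal sketch of such a check could look as follows (the tolerance, wavelength, and names are illustrative assumptions):

```python
# Minimal sketch: judge wavefront planeness in the quiet zone from phase angles
# measured at several scan positions. For an ideal plane wave, the phase is
# (near-)constant across the scan plane.

def planeness_error_mm(phases_deg, wavelength_mm):
    """Peak-to-peak phase deviation expressed as a path-length error."""
    spread_deg = max(phases_deg) - min(phases_deg)
    return spread_deg / 360.0 * wavelength_mm

# Phase angles (degrees) measured at test positions across the quiet zone,
# e.g. for a 77 GHz radar (wavelength about 3.9 mm).
phases = [10.2, 11.0, 10.6, 12.4, 10.9]
err = planeness_error_mm(phases, wavelength_mm=3.9)
print(f"path-length deviation: {err:.3f} mm")
print("recalibrate test bench" if err > 0.1 else "planeness within tolerance")
```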
In another exemplary embodiment, a method is provided for testing a distance sensor test bench having a folded optical path, the distance sensor test bench including a testing device for testing a distance sensor that operates using electromagnetic waves, a beam deflector, and a holding and positioning device for receiving a distance sensor to be tested in a mounting fixture. The testing device includes a receiving element, a radiating element, a delay unit, a test signal unit, and an analysis unit, the receiving element serving for receiving an electromagnetic free-space wave as a receive signal, and the radiating element serving for radiating an electromagnetic output signal. In the known simulation mode, the receive signal or a receive signal derived from the receive signal is fed through the delay unit with a settable time delay during the testing of the distance sensor, and is thus delayed to form a delayed signal as a simulated reflected signal. In order to test the distance sensor, the delayed signal or a signal derived from the delayed signal is radiated as an output signal via the radiating element. In a test mode, a test signal unit generates a test signal, and the test signal or a test signal derived from the test signal is radiated as an output signal via the radiating element. In the test mode, the analysis unit analyzes the receive signal or the derived receive signal in terms of its phase angle and/or its amplitude synchronously with the radiation of the test signal or of the derived test signal as an output signal. The determined value of phase angle and/or amplitude is stored. Furthermore, in terms of device design, it is provided that the delay unit, the test signal unit, and the analysis unit be enclosed by a housing, the housing being stationarily disposed in the distance sensor test bench. The receiving element of the testing device is connected to the housing via a signal line, and thus can be positioned remotely from the housing of the testing device. For purposes of testing the distance sensor test bench, the receiving element of the testing device is placed in the mounting fixture of the holding and positioning device, and a plurality of test positions are approached via the positioning device along an axis or in a plane in front of the receiving element in the mounting fixture of the holding and positioning device. At least one test operation is performed in each of the plurality of test positions, and thus a plurality of phase angles and/or amplitudes are determined, and the determined values of phase angle and/or amplitude are stored, in particular together with the position coordinate or position coordinates of the approached test positions. The axis or the plane in which the approached test positions lie in front of the receiving element ideally extends perpendicular to the expected direction of travel of the incoming wave. The planeness of the wave in the quiet zone in front of the distance sensor that is actually to be tested can be inferred based on the phase angles and/or amplitudes determined at the different positions. The determination of the planeness of the wave may be performed in the testing device itself, but the acquired data may also be transferred via an interface to an external computer and analyzed therein. The described testing device and the described method according to the independent patent claims may be refined and designed in a variety of specific ways. This is illustrated in connection with the figures. All figures show a testing device 1 for testing a distance sensor 2 that operates using electromagnetic waves, with FIGS. 1a and 1b showing testing devices 1 known from the prior art. In their basic function, which is a simulation mode, all testing devices 1 serve to simulate an object spaced apart from distance sensor 2, this spaced-apart object being simulated to a distance sensor 2 to be tested. The depicted testing devices 1 all have a receiving element 3 for receiving an electromagnetic free-space wave as a receive signal SRX and a radiating element 4 for radiating an electromagnetic output signal STX. In a simulation mode, the receive signal SRX or a receive signal S′RX derived from the receive signal SRX is fed through a delay unit 5 with a settable time delay tdelay,set, and is thus delayed to form a delayed signal Sdelay as a simulated reflected signal.
The delayed signal Sdelay or a delayed signal S′delay derived from the delayed signal Sdelay is then radiated as an output signal STX via radiating element 4. FIGS. 1a and 1b illustrate the use of such a testing device 1 as known in the prior art. Here, testing device 1 is part of a distance sensor test bench 6 having a folded optical path. In distance sensor test bench 6, the testing device 1 for testing a distance sensor 2 that operates using electromagnetic waves is positioned together with a beam deflector 7 and a holding and positioning device 8 for receiving a distance sensor 2 to be tested. In FIG. 1, it can be seen that the distance sensor 2 to be tested (for example, during an end-of-line test) emits an electromagnetic wave having a curved wavefront. Beam deflector 7 is parabolically shaped and serves to shape the reflected waves into waves having a plane wavefront. This is symbolized by the curved and parallel lines. In the simulation mode of distance sensor test bench 6, testing device 1 serves to simulate an object spaced an arbitrary distance apart in the sensing area of the distance sensor 2 to be tested. Via receiving element 3, testing device 1 receives the free-space wave SRX emitted by distance sensor 2 and feeds the receive signal SRX or a receive signal S′RX derived from the receive signal SRX to delay unit 5. A time delay tdelay,set is settable for delay unit 5, and delay unit 5 then delays the receive signal SRX or the derived receive signal S′RX according to the set delay tdelay,set to form a delayed signal Sdelay. This delayed signal Sdelay or a signal S′delay derived from the delayed signal Sdelay is then radiated as an output signal STX via radiating element 4 toward beam deflector 7, as illustrated in FIG. 1b. What is relevant here is that testing device 1 also radiates output signal STX as a free-space wave having a curved wavefront. Beam deflector 7 then in turn causes the free-space wave reflected by it to have a plane wavefront after reflection, which is of particular importance in this direction. As already explained at the outset, when large object distances are simulated, it is not only important that the simulated reflected signal be suitably delayed by delay unit 5 (a minimal sketch of this distance-to-delay relation follows below); it is also important that the wavefront of the simulated reflected signal be plane, which is characteristic of far fields for purely geometric reasons. Especially in the case of distance sensors 2 having a plurality of receiving elements 3, the phase angles Phi of waves received by neighboring receiving elements 3 can be analyzed. If phase angle differences are detected that do not correlate with the time delay of the reflected signal (and thus with the object distance determined via the time delay), then this can lead to misinterpretation or even to error conditions of the distance sensor 2 to be tested. This is why the creation of a plane wavefront by beam deflector 7 is essential, especially in this propagation direction. Even small differences between the various elements of the illustrated distance sensor test bench 6 may have as a consequence that the wavefronts are no longer plane in the measurement area directly in front of the distance sensor 2 to be tested, i.e., in the so-called quiet zone, and that the wavefronts per se are curved or come in at an angle.
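As a minimal sketch of the distance-to-delay relation used in the simulation mode (free-space propagation assumed; the function name and the internal-latency parameter are illustrative assumptions):

```python
# Minimal sketch: in the simulation mode, the settable time delay t_delay,set
# determines the object distance that the testing device simulates, since the
# distance sensor interprets the delayed echo as a round trip at light speed.

C = 299_792_458.0  # speed of light (m/s)

def delay_for_distance(simulated_distance_m, internal_latency_s=0.0):
    """Round-trip delay to set for a desired simulated object distance."""
    return 2.0 * simulated_distance_m / C - internal_latency_s

for d in (10.0, 50.0, 150.0):
    print(f"simulated object at {d:5.1f} m -> t_delay,set = "
          f"{delay_for_distance(d)*1e9:8.1f} ns")
```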
In order to ensure the proper functioning of the illustrated distance sensor test bench 6, the calibration of distance sensor test bench 6 is rechecked at periodic intervals. To this end, generally, both the testing device 1 and the distance sensor 2 are removed from their respective installed positions and replaced with a corresponding testing apparatus, which includes a transmitter at the installed position of testing device 1 and a corresponding receiver at the installed position of the distance sensor 2 that is normally to be tested. The receiver can then be moved by holding and positioning device 8 in a measurement plane substantially perpendicular to the desired or expected incoming direction of the electromagnetic free-space wave, and the receiver then detects the phase angle and often also the amplitude of the incoming free-space wave at different positions, so that the planeness of the incoming free-space wave can be assessed. If the planeness of the incoming electromagnetic waves in the quiet zone of the receiver does not meet the requirements, the positioning of the various elements of distance sensor test bench 6 must be corrected. The previously described check for a plane wavefront in the quiet zone in front of the installed position of the distance sensor 2 to be tested, which is located in holding and positioning device 8 in the simulation mode, is very complex and costly. The testing devices 1 illustrated in FIGS. 2 through 6 enable the distance sensor test bench 6, as shown in FIGS. 1 through 6, to be tested itself, namely as to whether the incoming waves in the quiet zone in front of the installed position of the distance sensor 2 to be tested have a plane phase front. Thus, using the testing device 1 described below, it is no longer necessary to completely change the set-up of distance sensor test bench 6. Rather, it is sufficient to slightly modify the set-up. FIGS. 2 through 5 only show a suitably designed testing device 1 having an extended functionality that enables the previously described checking of the calibration of distance sensor test bench 6. Each testing device 1 is characterized in that, in addition to the already known delay unit 5, it has a test signal unit 9 which generates a test signal Stest in a test mode, the test signal Stest or a test signal S′test derived from the test signal Stest being radiated as an output signal STX via radiating element 4. Through this measure, testing device 1, regardless of whether it receives a receive signal SRX via its receiving element 3, is basically capable of radiating a test signal Stest via its radiating element 4, which can then be used to subject the space in front of the distance sensor 2 that is normally to be tested to a test signal suitable for measuring the planeness of the waves. Synchronously with the radiation of the test signal Stest or of the derived test signal S′test as an output signal STX, an analysis unit 10 analyzes the synchronously received receive signal SRX or the derived receive signal S′RX in terms of its phase angle Phi and/or in terms of its amplitude A. This is expressed in the figures by the notation Phi(SRX/S′RX) and A(SRX/S′RX), so the slash is by no means to be understood as a fraction bar, but as a separator indicating an alternative. The determined value of phase angle Phi and/or amplitude A is then stored. This embodiment of testing device 1 basically makes it possible to actively generate a test signal Stest (instead of merely using a previously measured signal), and to analyze a resulting receive signal SRX in terms of its phase angle Phi and/or in terms of its amplitude A, which is essential for determining the planeness of the incoming free-space wave. A minimal sketch of such a phase and amplitude measurement follows below.
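The patent does not prescribe a particular phase detector; assuming a sampled receive signal and quadrature correlation against the reference frequency, a minimal sketch of determining Phi and A could look as follows (signal model and names are illustrative assumptions):

```python
import math

# Minimal sketch: measure the phase angle Phi and amplitude A of a sampled
# receive signal with respect to the test signal used as reference, by
# correlating with cosine/sine references at the reference frequency.

def phase_and_amplitude(samples, f_hz, fs_hz):
    n = len(samples)
    i_sum = sum(s * math.cos(2 * math.pi * f_hz * k / fs_hz)
                for k, s in enumerate(samples))
    q_sum = sum(s * math.sin(2 * math.pi * f_hz * k / fs_hz)
                for k, s in enumerate(samples))
    amplitude = 2.0 * math.hypot(i_sum, q_sum) / n
    phase_deg = math.degrees(math.atan2(-q_sum, i_sum))
    return phase_deg, amplitude

# Synthesize a receive signal: amplitude 0.5, phase shifted by 30 degrees.
fs, f, n = 1.0e6, 10_000.0, 1000
rx = [0.5 * math.cos(2 * math.pi * f * k / fs + math.radians(30)) for k in range(n)]
phi, a = phase_and_amplitude(rx, f, fs)
print(f"Phi = {phi:.1f} deg, A = {a:.3f}")
```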
In FIGS. 2 through 6, the different electronic units, namely delay unit 5, test signal unit 9, and analysis unit 10, are schematically disposed within a box that is to be understood in a purely functional sense. What is important here is merely that delay unit 5 and analysis unit 10 each be capable of receiving the receive signal SRX or the signal S′RX derived from the receive signal SRX, and that delay unit 5 and test signal unit 9 correspondingly be capable of accessing the signal line connected to radiating element 4 so as to output corresponding signals. It will be appreciated that in various embodiments, the different units may be implemented in a common hardware unit, for example on a common field-programmable gate array (FPGA), in a plurality of hardware and functional units (for example, a plurality of FPGAs or a plurality of signal processors), or may be implemented to be partly digital and partly analog or completely digital. FIG. 3 exemplarily illustrates what is meant when it is said that the receive signal SRX or a receive signal S′RX derived from the receive signal is fed to delay unit 5. It is shown that the receive signal SRX received as a free-space wave via receiving element 3 is down-converted by an input mixer 11 to a lower intermediate frequency, so that the signal fed to delay unit 5 or analysis unit 10 is not the originally received receive signal SRX, but the receive signal S′RX derived by down-conversion. The equivalent applies respectively to the delayed signal Sdelay and the delayed signal S′delay derived by up-conversion from the delayed signal Sdelay, and to the test signal Stest generated by test signal unit 9 and the test signal S′test derived through up-conversion by an output mixer 12, which is then radiated as an output signal STX. The advantage of these specific embodiments is that the demands placed on the speed of signal processing in delay unit 5, test signal unit 9, and analysis unit 10 are lower than if, for example, the receive signal SRX had to be processed directly or if correspondingly high-frequency delayed signals Sdelay or test signals Stest had to be generated directly. In the exemplary embodiments shown in FIGS. 2 through 6, delay unit 5, test signal unit 9, and analysis unit 10 are enclosed by a housing 13. In FIGS. 4 through 6, receiving element 3 is connected to the housing via a signal line 14, which allows receiving element 3 to be positioned remotely from housing 13 of testing device 1. This has the advantage that receiving element 3 can be very easily moved to the position in distance sensor test bench 6 where the distance sensor 2 to be tested is usually located in the normal simulation mode, namely in a mounting fixture of holding and positioning device 8, as illustrated in FIG. 6. In the exemplary embodiments illustrated in FIGS. 2 through 6, analysis unit 10 determines the phase angle Phi of the receive signal SRX or the phase angle Phi of the derived receive signal S′RX with respect to a reference signal, which is the test signal Stest or the derived test signal S′test, respectively. Thus, according to the previously introduced and explained notation, the phase angle Phi is then a function of the transmit signal and of the receive signal, i.e., Phi(SRX/S′RX, Stest/S′test). The test signal units 9 shown in the various exemplary embodiments generate different test signals Stest, namely in the form of a pulse, a pulse train, a continuous wave signal, or a frequency-modulated continuous wave signal.
Accordingly, in the various exemplary embodiments, analysis unit 10 analyzes the receive signal SRX or the derived receive signal S′RX in terms of its phase angle Phi in different ways, namely via a phase detector or directly through propagation time measurement with respect to the test signal Stest radiated as an output signal STX or with respect to the derived test signal S′test radiated as an output signal STX. The analysis units 10 of the testing devices shown in FIGS. 2 through 6 analyze a plurality of receive signals SRX or a plurality of derived receive signals S′RX in terms of phase angle Phi and/or in terms of amplitude A and store a plurality of values of phase angle Phi and/or amplitude A. Determining a plurality of values of phase angle Phi and/or amplitude A is useful because the position of receiving element 3 is usually varied in the test mode. Thus, the quiet zone of the distance sensor 2 to be tested in the simulation mode is scanned and measured linearly or two-dimensionally by positional variation of one or two position variables, as indicated in FIG. 6 by the double-headed arrow at holding and positioning device 8. The deviations of the measured phase angles Phi and/or amplitudes A from measurement point to measurement point are a measure of the planeness of the incoming waves in the quiet zone. The deviations allow a conclusion to be drawn as to whether the distance sensor test bench 6 to be tested still meets the accuracy requirements or needs to be recalibrated. The deviations may be determined in the testing device 1 itself, but this does not have to be the case. The testing device 1 of FIG. 5 has a communication interface 15 via which testing device 1 can be connected to an external computer. Testing device 1 or, more precisely, analysis unit 10 then transfers the value of phase angle Phi and/or amplitude A of the receive signal SRX or of the derived receive signal S′RX via communication interface 15 to the external computer. The external computer can then perform the analysis with respect to the planeness of the received wave, especially if a plurality of determined phase angles and/or amplitudes have been transferred. The communication interface 15 of the testing device 1 of FIG. 5 is also used for transferring control commands, particularly from an external computer to testing device 1. Thus, testing device 1 may in particular receive, via the communication interface, an external request that causes test signal unit 9 to generate the test signal Stest, which is then radiated (possibly in the form of a derived test signal S′test) as an output signal STX via radiating element 4. Synchronously with the radiation of the output signal STX, analysis unit 10 analyzes the receive signal SRX, which was caused by the transmitted test signal Stest, or the derived receive signal S′RX in terms of its phase angle Phi and/or amplitude A. FIG. 6 not only depicts a distance sensor test bench 6 having a folded optical path, but also illustrates the above-described method 16 for testing this distance sensor test bench 6. Distance sensor test bench 6 has a testing device 1 for testing a distance sensor 2 that operates using electromagnetic waves, a beam deflector 7, and a holding and positioning device 8 for receiving a distance sensor 2 to be tested in a mounting fixture.
As described earlier, testing device 1 includes a receiving element 3, a radiating element 4, a delay unit 5, a test signal unit 9, and an analysis unit 10, the receiving element 3 serving for receiving an electromagnetic free-space wave as a receive signal SRX, and the radiating element 4 serving for radiating an electromagnetic output signal STX. In the simulation mode, the receive signal SRX or a receive signal S′RX derived from the receive signal SRX is fed through delay unit 5 with a settable time delay tdelay,set during the testing of distance sensor 2, and is thereby delayed to form a delayed signal Sdelay as a simulated reflected signal. In order to test distance sensor 2, the delayed signal Sdelay or a signal S′delay derived from the delayed signal Sdelay is radiated as an output signal STX via radiating element 4. In the test mode that is actually of interest here, test signal unit 9 generates a test signal Stest, and the test signal Stest or a test signal S′test derived from the test signal Stest is radiated as an output signal STX via radiating element 4. Synchronously with the radiation of the test signal Stest or of the derived test signal S′test as an output signal STX, analysis unit 10 analyzes the receive signal SRX or the derived receive signal S′RX in terms of its phase angle Phi and/or its amplitude A. Synchronous analysis means that the transmission of the test signal and the analysis of the receive signal are interrelated in time, since the receive signal is usually the transmitted, reflected test signal. For example, when the propagation time of a pulse is determined, transmission and analysis are performed synchronously, but slightly after one another in time. When a frequency-modulated continuous wave signal is used as a test signal, the transmission of the signal and the analysis of the receive signal actually overlap in time, since the two signals are mixed together. In any case, the then determined value of phase angle Phi and/or amplitude A is stored, which is necessarily the case because the result of the calculation must be available in some form in analysis unit 10 from an information technology perspective. Delay unit 5, test signal unit 9, and analysis unit 10 are enclosed by a housing 13 in the illustrated exemplary embodiments, including the exemplary embodiment of FIG. 6. Testing device 1 and its housing 13 are stationarily disposed in distance sensor test bench 6. Receiving element 3 is connected to housing 13 via a signal line 14, and thus can be positioned remotely from housing 13 of testing device 1. This allows receiving element 3 of testing device 1 to be placed in the mounting fixture of holding and positioning device 8 for purposes of testing distance sensor test bench 6. A plurality of test positions are approached in a plane in front of the receiving element 3 located in the mounting fixture of holding and positioning device 8. For this purpose, holding and positioning device 8 has suitable actuators that allow for accurate spatial positioning, especially of the mounting fixture of holding and positioning device 8 or of holding and positioning device 8 in its entirety. At least one test operation is performed in each of the plurality of test positions, and in each case at least a phase angle Phi and/or an amplitude A is determined, and the determined value of phase angle Phi and/or amplitude A is stored. A minimal sketch of this measurement loop follows below.
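As a minimal sketch of the scan-measure-store loop just described (the positioning and measurement callables stand in for the actual bench integration and are assumptions):

```python
# Minimal sketch: approach each test position with the holding and positioning
# device, run a test operation, and store the determined phase angle and/or
# amplitude together with the position coordinates.

def run_bench_test(positions, move_to, measure):
    """positions: (y, z) coordinates in the plane in front of the receiving
    element; move_to/measure: callables provided by the bench integration."""
    records = []
    for y, z in positions:
        move_to(y, z)            # posture control of the positioning device
        phi, amp = measure()     # radiate test signal, analyze synchronously
        records.append({"y": y, "z": z, "phi_deg": phi, "amplitude": amp})
    return records               # transferred e.g. to an external computer

# Example: scan a 3 x 3 grid of test positions spaced 20 mm apart.
grid = [(y, z) for y in (-0.02, 0.0, 0.02) for z in (-0.02, 0.0, 0.02)]
results = run_bench_test(grid,
                         move_to=lambda y, z: None,      # stand-in actuator
                         measure=lambda: (10.5, 0.48))   # stand-in measurement
print(f"stored {len(results)} measurement records")
```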
Thus, with only a few additional device features that go beyond what is anyway required for the simulation mode, method 16 enables distance sensor test bench 6 to be tested; i.e., to check whether it is still calibrated such that plane wavefronts are present in a plane in front of the installed position of the distance sensor 2 to be tested. While subject matter of the present disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. Any statement made herein characterizing the invention is also to be considered illustrative or exemplary and not restrictive, as the invention is defined by the claims. It will be understood that changes and modifications may be made, by those of ordinary skill in the art, within the scope of the following claims, which may include any combination of features from different embodiments described above. The terms used in the claims should be construed to have the broadest reasonable interpretation consistent with the foregoing description. For example, the use of the article “a” or “the” in introducing an element should not be interpreted as being exclusive of a plurality of elements. Likewise, the recitation of “or” should be interpreted as being inclusive, such that the recitation of “A or B” is not exclusive of “A and B,” unless it is clear from the context or the foregoing description that only one of A and B is intended. Further, the recitation of “at least one of A, B and C” should be interpreted as one or more of a group of elements consisting of A, B and C, and should not be interpreted as requiring at least one of each of the listed elements A, B and C, regardless of whether A, B and C are related as categories or otherwise. Moreover, the recitation of “A, B and/or C” or “at least one of A, B or C” should be interpreted as including any singular entity from the listed elements, e.g., A, any subset from the listed elements, e.g., A and B, or the entire list of elements A, B and C.

LIST OF REFERENCE CHARACTERS

1: testing device
2: distance sensor
3: receiving element
4: radiating element
5: delay unit
6: distance sensor test bench
7: beam deflector
8: holding and positioning device
9: test signal unit
10: analysis unit
11: input mixer
12: output mixer
13: housing
14: signal line
15: communication interface
16: method
SRX, S′RX: receive signal, derived receive signal
STX, S′TX: output signal, derived output signal
tdelay,set: set time delay
Sdelay, S′delay: delayed signal, derived delayed signal
Stest, S′test: test signal, derived test signal
Phi: phase angle
A: amplitude
33,078
11860302
DETAILED DESCRIPTION Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity. Accordingly, while further examples are capable of various modifications and alternative forms, some particular examples thereof are shown in the figures and will subsequently be described in detail. However, this detailed description does not limit further examples to the particular forms described. Further examples may cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Same or like numbers refer to like or similar elements throughout the description of the figures, which may be implemented identically or in modified form when compared to one another while providing for the same or a similar functionality. It will be understood that when an element is referred to as being “connected” or “coupled” to another element, the elements may be directly connected or coupled or connected via one or more intervening elements. If two elements A and B are combined using an “or”, this is to be understood to disclose all possible combinations, i.e. only A, only B, as well as A and B, if not explicitly or implicitly defined otherwise. An alternative wording for the same combinations is “at least one of A and B” or “A and/or B”. The same applies, mutatis mutandis, for combinations of more than two elements. The terminology used herein for the purpose of describing particular examples is not intended to be limiting for further examples. Whenever a singular form such as “a,” “an” and “the” is used and using only a single element is neither explicitly nor implicitly defined as being mandatory, further examples may also use plural elements to implement the same functionality. Likewise, when a functionality is subsequently described as being implemented using multiple elements, further examples may implement the same functionality using a single element or processing entity. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used, specify the presence of the stated features, integers, steps, operations, processes, acts, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, processes, acts, elements, components and/or any group thereof. Unless otherwise defined, all terms (including technical and scientific terms) are used herein in their ordinary meaning in the art to which the examples belong. Safety may be regarded as an important aspect of driving assistant and autonomous driving systems. To ensure the reliability of such concepts and to prevent, for example, autonomously driving cars from crashing into surrounding objects, the surrounding objects may be characterized by position, distance, orientation, velocity, acceleration, rotation, or a combination thereof. In case of automotive applications, the surrounding objects typically can be pedestrians, cars, motorcycles or trucks. For characterizing those objects, location/distance sensors like LIDAR systems, radar systems, cameras or ultrasonic sensors are commonly used. Measurement samples coming from such location sensors can be used to characterize an object, for example, regarding its shape, position, orientation, velocity and/or acceleration. To get a higher accuracy, a single measurement (e.g.
of the object's position or orientation) may be filtered over time, which also allows derivatives of the states to be estimated (e.g. velocity out of position displacement over time). Conventional filtering techniques include Kalman Filters or Particle Filters. In autonomous driving, a common source of information can be laser scanners, which can provide detailed information about the shape of the object (contour). In case of traffic participants, objects can be represented by a box. There are conventional methods for extracting these boxes with their parameters (e.g. position of the box, orientation). To incorporate the measurement information (box parameters), it is helpful to model its accuracy (usually expressed by a variance in 1D, otherwise by a covariance matrix). Conventional approaches require a covariance matrix describing the variance of the different measured line parameters and their covariance (correlation). The system covariance matrix is helpful not only to get a precise estimation of the object's state; it also represents the accuracy of the estimated states and the correlation of the states. Decisions of following (downstream) processing modules are usually based on the accuracy of the states. Conventional approaches usually use heuristics or look-up-tables for the covariance matrix and ignore the covariance between different parameters. While embodiments of the present disclosure will be described with regard to automotive applications, the skilled person having benefit from the present disclosure will appreciate that the presented concepts can be used in other scenarios as well, such as for airplanes or for boats, for example. FIG. 1 schematically shows a vehicle 110 in accordance with embodiments of the present disclosure. The vehicle includes an apparatus 120 comprising one or more location/distance sensors 122 and a processing circuit 121. The location/distance sensors 122 can be LIDAR, radar, or ultrasonic sensors, or a combination thereof, transmitting signals and receiving the signals reflected from a remote object 130. The location/distance sensors 122 are coupled to the processing circuit 121 in order to transfer measurement data. Measurement data can be, for example, indicative of coordinates of reflection points on the object's surface. The processing circuit 121 is configured to compute, based on the measurement samples, line parameters of one or more lines fitting the coordinates of the measurement points to obtain a characterization of the object 130, for example with regard to its position/distance, orientation, velocity, acceleration or rotation. An example of fitting boxes or lines to coordinates of measurement points 21, 22 is illustrated in FIG. 2. The measurement points 21, 22 may be assigned to one or more edges of the object depending on their positional arrangement. In the example of FIG. 2, the measurement points 21, 22 are assigned to two edges of object 130, thereby assuming that the object 130 can be sufficiently characterized by the two edges. The edges may be modeled by respective lines 23 and 24. Thus, a line fitting process may fit the two lines 23, 24 to the coordinates of the measurement points 21, 22. In some examples, the lines 23, 24 can be assumed to be orthogonal to each other. In this case the respective orientation of the two lines 23, 24 can be defined by the same line parameters. The skilled person having benefit of the present disclosure will appreciate that the line fitting is also possible for one or more than two lines and for “non-orthogonal” lines.
Further examples of line fitting processes are "I-shape fitting" for one line, "U-shape fitting" for three lines of a rectangle, or fitting of a complete rectangle. In some example implementations, the line parameters of the lines 23, 24 can be computed based on a least squares fit. For example, a (constrained) least squares algorithm can be used for that. In constrained least squares one solves a linear least squares problem with an additional constraint on the solution. That is, the unconstrained equation

r = A p   (1)

has to be fit as closely as possible (in the least squares sense) while ensuring that some other property of p is maintained. The linear equation system (1) is based on a system matrix A including the coordinates of the measurement points 21, 22. In the illustrated example the coordinates are Cartesian coordinates in the form (x, y). The vector r denotes deviations (distances) of the measurement points 21, 22 from the (fitted) lines 23 and 24. The line parameters defining the lines 23, 24 are included in vector p. For the example of two orthogonal lines 23, 24 the line parameters can be expressed as p = [n_1, n_2, c_1, c_2]^T. The two line parameters n_1, n_2 define the slope/orientation of the (orthogonal) lines 23, 24. A third line parameter c_1 and a fourth line parameter c_2 correspond to a position of the lines 23, 24. The slope or orientation is determined by [n_1, n_2] for the first line 23 and by [n_2, -n_1] for the second line 24. The y-intercepts of the lines can be defined according to y(x=0) = -c_1/n_2 (for line 23) and y(x=0) = c_2/n_1 (for line 24). For the present example with two orthogonal lines 23, 24, the system matrix A can be represented according to:

A = \begin{bmatrix} x_{j=0,i=1} & y_{j=0,i=1} & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots \\ x_{j=0,i=N} & y_{j=0,i=N} & 1 & 0 \\ -y_{j=1,i=1} & x_{j=1,i=1} & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ -y_{j=1,i=M} & x_{j=1,i=M} & 0 & 1 \end{bmatrix}

wherein x_{j,i} ∈ {x_{j=0,i=1}, ..., x_{j=0,i=N}} and y_{j,i} ∈ {y_{j=0,i=1}, ..., y_{j=0,i=N}} denote the i-th of N measurement coordinates assigned to a first line j=0, and x_{j,i} ∈ {x_{j=1,i=1}, ..., x_{j=1,i=M}} and y_{j,i} ∈ {y_{j=1,i=1}, ..., y_{j=1,i=M}} denote the i-th of M coordinates of measurement points of a second line j=1. Hence a system of linear equations can be set up according to:

r_i = n_1 x_{j,i} + n_2 y_{j,i} + c_1 (for the first line 23), or
r_i = n_2 x_{j,i} - n_1 y_{j,i} + c_2 (for the second, orthogonal line 24),

wherein r_i is the distance of the i-th measurement point from the respective line 23, 24 defined by the line parameters n_1, n_2, c_1 and c_2. The determination of the line parameters p = [n_1, n_2, c_1, c_2]^T is known and provides lines 23, 24 approximating the external contours of the object 130. For example, an intersection point (x_i, y_i) between lines 23 and 24 can be used as a reference point for the object's position. To get a higher accuracy, a single measurement of the lines 23, 24 or another quantity derived thereof may be filtered over time. Conventional filtering techniques include Kalman Filters or Particle Filters. The principle of such filters is to estimate a state of an object based on adequately weighting previous and current measurements in order to achieve an optimal estimation of an object's state. The object's state may be any state of interest, such as its position, distance, orientation, velocity, acceleration, rotation, etc. FIG. 3 illustrates an operation principle of a Kalman filter for the example of tracking a state x_k (e.g. position, orientation, etc.) of an object 130.
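For illustration only, the following is a minimal numpy sketch of such a constrained L-shape fit, assuming the measurement points have already been assigned to the two edges; the function name fit_l_shape, the per-edge centering, and the normalization of [n_1, n_2] to unit length are assumptions made for this sketch, not details taken from the disclosure above.

import numpy as np

def fit_l_shape(pts0, pts1):
    # pts0, pts1: (N, 2) and (M, 2) arrays of points assigned to lines 23 and 24
    m0 = pts0.mean(axis=0)  # mean of the points of the first line
    m1 = pts1.mean(axis=0)  # mean of the points of the second line
    d0 = pts0 - m0
    d1 = pts1 - m1
    # Centered rows: [x, y] for line 23 and [-y, x] for the orthogonal line 24,
    # so that both residuals are linear in the common normal vector [n1, n2]
    B = np.vstack([d0, np.column_stack([-d1[:, 1], d1[:, 0]])])
    # [n1, n2] is the eigenvector of B^T B belonging to the smallest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(B.T @ B)
    n1, n2 = eigvecs[:, 0]
    # c1, c2 follow from forcing each line through its cluster mean
    c1 = -(n1 * m0[0] + n2 * m0[1])
    c2 = -(n2 * m1[0] - n1 * m1[1])
    return np.array([n1, n2, c1, c2])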
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements y_k observed over timeframes k, containing statistical noise 32 (and other inaccuracies), and produces estimates \hat{x}_k of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The algorithm works in a two-phase process. In the prediction phase, the Kalman filter produces estimates \hat{x}_k^- of the current state variables, along with their uncertainties 33. Once the outcome of the next measurement y_k (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated (\hat{x}_k^- \to \hat{x}_k) using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It can run in real time, using the present input measurements y_k and the previously calculated state \hat{x}_k^- and its uncertainty 31; no additional past information is required. The estimate \hat{x}_k of the current state can be computed according to:

\hat{x}_k = \hat{x}_k^- + K_k (y_k - C \hat{x}_k^-)   (2)

wherein

K_k = \frac{P_k^- C^T}{C P_k^- C^T + R}   (3)

denotes a weight coefficient calculated from a variance or covariance P_k^- (reference numeral 31) of the object's previously estimated state \hat{x}_k^-, a measurement variance or covariance R (reference numeral 32), and a measurement coefficient C. The measurement coefficient C expresses a relation between the measurement sample y_k and the true state x_k of the tracked object according to:

y_k = C x_k

For the present example the object's 130 position can directly be measured (e.g., intersection point (x_i, y_i)). Thus C is according to C = 1. The above calculation of K_k includes both an uncertainty P_k^- of the previously calculated state \hat{x}_k^- and an uncertainty R of the current measurement y_k. The coefficient K_k represents a contribution of the current measurement y_k and the previously calculated state \hat{x}_k^- to the estimate \hat{x}_k of the current state. A relatively large value of K_k can indicate a relatively low uncertainty (high certainty) of the current measurement y_k compared to the uncertainty of the previously calculated state \hat{x}_k^-. In such a case the estimate \hat{x}_k of the current state is predominantly determined by the current measurement y_k. For a small value of K_k, the estimate \hat{x}_k of the current state is predominantly determined by the previously calculated state \hat{x}_k^-. Conventional methods of adjusting the uncertainties P_k^- and/or R use look-up tables or heuristics, which often include predefined uncertainties. The predefined uncertainties can be selected based on a measurement accuracy of the sensors 122 and/or the distance between the sensors 122 and the object 130, for example. Methods using such predefined uncertainties do not ensure a reliable adaptation of the filters to the actual uncertainties of the line fitting process (yielding the current measurements y_k). This may be the case if an uncertainty of the line fitting process changes rapidly, for example due to a partly obstructed line of sight towards the object 130 or changing weather conditions. It is one finding of the present disclosure that a tracking filter can operate more accurately when the adaptation of the weight coefficient K_k is based on an online computation of the uncertainty R of the current measurements y_k instead of relying on predefined values. Here, the term "online" means that the uncertainty is determined along with determining the current measurements, e.g.
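As a point of reference, the correction step of equations (2) and (3) can be written compactly for the scalar case; the following is a minimal sketch assuming a scalar state and measurement (C = 1 as in the example above), with hypothetical function and variable names.

def kalman_update(x_pred, P_pred, y, R, C=1.0):
    # Weight coefficient K_k according to equation (3)
    K = P_pred * C / (C * P_pred * C + R)
    # Updated state estimate according to equation (2)
    x_est = x_pred + K * (y - C * x_pred)
    # Updated state uncertainty, carried into the next recursion
    P_est = (1.0 - K * C) * P_pred
    return x_est, P_est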
along with determining the current measurement of lines 23, 24 defined by the line parameters p = [n_1, n_2, c_1, c_2]^T, the position (x_i, y_i), and/or the orientation. The present disclosure provides a method for estimating such uncertainties, expressed by covariances and variances of the fitted lines, simultaneously with respect to the measurement samples and the line parameters. Depending on the dimension of the measurements, the covariances can be scalars or matrices. A high-level flowchart 400 of such a method for characterizing an object 130 based on measurement samples from one or more location sensors 122 is shown in FIG. 4. The skilled person having benefit from the present disclosure will appreciate that method 400 can be implemented by an appropriately configured apparatus 120 of FIG. 1. Method 400 includes computing 410 line parameters (e.g., p = [n_1, n_2, c_1, c_2]^T) of one or more lines 23, 24 fitting the measurement samples 21, 22 to obtain a characterization (e.g., position, orientation) of the object 130. Further, method 400 includes computing 420 a system covariance matrix Σ_sys including variances of the respective line parameters and covariances between different line parameters, based on the system matrix A including coordinates of the measurement samples 21, 22 and based on deviations r of the measurement samples 21, 22 from the one or more lines 23, 24. For the example scenario of FIG. 2, the system covariance matrix Σ_sys can be computed according to:

\Sigma_{sys} = \frac{r^T r}{N + M - 4} (A^T A)^{-1}   (4)

wherein (N+M-4) denotes the cumulated number of measurement samples 21, 22 of both lines 23, 24 minus the number of degrees of freedom. In this case the degrees of freedom are equal to four due to the four line parameters [n_1, n_2, c_1, c_2]^T. The skilled person will appreciate that in the illustrated example with the two orthogonal lines 23, 24 the covariance matrix Σ_sys is a 4×4 matrix having the respective variances of the four line parameters n_1, n_2, c_1, c_2 on its main diagonal and covariances between different line parameters on its minor diagonals. The variance of a random variable can be regarded as a measure of the variability of the random variable. Likewise, the covariance is a measure of the joint variability of two random variables. The system covariance matrix Σ_sys can be regarded as a measure of the uncertainty of the current measurement of lines 23, 24. In this sense it can be used similarly to R in equation (3) above for updating the weight coefficient of a Kalman filter while tracking lines/edges 23, 24, for example. Optionally, the system covariance matrix Σ_sys can also be used to determine further variances or covariances of arbitrary points located on the fitted lines 23, 24 according to:

\Sigma_a = p_a^T \Sigma_{sys} p_a   (5)

wherein p_a = [x_a, y_a, 1, 0]^T denotes a point on the first line 23 or p_a = [-y_a, x_a, 0, 1]^T a point on the second line 24. Such points can be expressed in the form of a four-dimensional vector to solve the linear functions according to:

0 = n_1 x_a + n_2 y_a + 1·c_1 = p^T · p_a for the first line 23, or
0 = n_2 x_a - n_1 y_a + 1·c_2 = p^T · p_a for the second line 24, orthogonal to the first line,

for a vector of line parameters according to p = [n_1, n_2, c_1, c_2]^T. The covariance Σ_a can be regarded as a measure of the uncertainty of the current measurement of point p_a. In this sense it can be used similarly to R in equation (3) above for updating the weight coefficient of a Kalman filter while tracking measurement point p_a, for example. Further optionally, a covariance matrix Σ_i related to the intersection point (x_i, y_i) between lines 23 and 24 can be determined in a similar way.
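A compact numerical transcription of equations (4) and (5) may look as follows; this is an illustrative sketch only, assuming A and p have been set up as described above, and the helper names are hypothetical.

import numpy as np

def system_covariance(A, p):
    # Equation (4): Sigma_sys = r^T r / (N + M - 4) * (A^T A)^-1, with r = A p
    r = A @ p
    dof = A.shape[0] - 4  # N + M samples minus the four line parameters
    return (r @ r) / dof * np.linalg.inv(A.T @ A)

def point_variance(sigma_sys, xa, ya, on_first_line=True):
    # Equation (5): variance of an arbitrary point on one of the fitted lines
    pa = np.array([xa, ya, 1.0, 0.0]) if on_first_line else np.array([-ya, xa, 0.0, 1.0])
    return pa @ sigma_sys @ pa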
The intersection corresponds to a point (x_i, y_i) located on both lines. Hence point coordinates for both linear equations of the respective lines can be used to compute the covariance matrix according to:

\Sigma_i = [p_1, p_2]^T \Sigma_{sys} [p_1, p_2]   (6)

wherein p_1 and p_2 can be expressed as p_1 = [x_i, y_i, 1, 0]^T for the first line 23 and p_2 = [-y_i, x_i, 0, 1]^T for the second line 24. The skilled person will appreciate that in the illustrated example with the two orthogonal lines 23, 24 the covariance matrix Σ_i is a 2×2 matrix having the respective variances of the x- and y-coordinates on its main diagonal and the covariances between them on the off-diagonal. The mentioned variances and covariances are illustrated graphically in the right chart of FIG. 2 in the form of black ellipses around the intersection point (x_i, y_i). Σ_i can be regarded as a measure of the uncertainty of the intersection point (x_i, y_i). In this sense it can be used similarly to R in equation (3) above for updating the weight coefficient of a Kalman filter while tracking the intersection point (x_i, y_i), for example. FIG. 5 schematically illustrates different measurement scenarios and their influence on the uncertainty of a resulting reference point used for the update in tracking position (dot) and covariance (circle around dot). Sketch "A)" illustrates a scenario with only one measurement sample at a corner of object 130 serving as the reference point for tracking. Here, no computation according to equations (4) and (6) is possible. The reference point has the same uncertainty/covariance as the sensor(s) (R_pos = Σ_sensor). Further, the covariance is without correlation between x- and y-position. Sketch "B)" illustrates a scenario with a relatively small number of measurement samples 21, 22 not close to the corner, but enough for setting up the linear equation system (1), estimating lines 23, 24, estimating the system covariance matrix Σ_sys according to equation (4), estimating the intersection point (x_i, y_i) as reference point for tracking, and estimating the covariance matrix Σ_i according to equation (6). The small number of measurement samples 21, 22 not too close to the corner can lead to a rather high uncertainty of the current measurement of reference point (x_i, y_i) (R_pos = Σ_i > Σ_sensor). The covariance is relatively large with correlation between x- and y-position. Sketch "C)" illustrates a scenario with a higher number of measurement samples 21, 22 closer to the corner, leading to a lower uncertainty of the current measurement of reference point (x_i, y_i) compared to sketch "B)". The resulting covariance is relatively small with correlation between x- and y-position (R_pos = Σ_i < Σ_sensor). According to some example embodiments, the object 130 can additionally or alternatively be characterized by its (angular) orientation φ, which is schematically illustrated in FIG. 6. As can be seen, the object's orientation φ can be measured using the line parameters [n_1, n_2, c_1, c_2]^T of the fitted lines 23, 24 according to:

φ = atan(n_1/n_2)

Using the variances and covariances contained in the system covariance matrix Σ_sys, it is possible to compute the variance σ_φ² of the orientation φ based on a combination of the line parameters n_1, n_2, the variances of said line parameters, and the covariance of said line parameters.
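Equation (6) can be transcribed in the same style; again a sketch, assuming sigma_sys was computed as above and the function name is illustrative.

import numpy as np

def intersection_covariance(sigma_sys, xi, yi):
    # Columns p_1 and p_2 express the intersection point on line 23 and line 24
    P = np.column_stack([[xi, yi, 1.0, 0.0],
                         [-yi, xi, 0.0, 1.0]])
    # Equation (6): 2x2 covariance of the intersection point (x_i, y_i)
    return P.T @ sigma_sys @ P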
In a particular embodiment related to estimating two orthogonal lines 23, 24, the variance of the orientation φ can be computed according to

\sigma_\varphi^2 = \frac{\sigma_{n_1}^2 n_2^2 + \sigma_{n_2}^2 n_1^2 + 2\,cov(n_1, n_2)\, n_1 n_2}{n_1^2 + n_2^2}   (7)

wherein σ_{n_1}² and σ_{n_2}² denote the variances of the respective line parameters n_1, n_2, and cov(n_1, n_2) denotes the corresponding covariance. σ_φ² can be regarded as a measure of the uncertainty of the current measurement of orientation φ. In this sense it can be used similarly to R in equation (3) above for updating the weight coefficient of a Kalman filter while tracking the orientation φ, for example. FIG. 7 schematically illustrates different measurement scenarios and their influence on the uncertainty of the orientation φ. Sketch "A)" illustrates a scenario with only one measurement sample 21 at a corner of object 130. Here, no computation according to equations (4) and (7) is possible. Since the single measurement does not yield any information about the object's orientation, σ_{φ,A}² can be assumed infinite. Sketch "B)" illustrates a scenario with a small number of measurement samples 21, 22, enough for setting up the linear equation system (1), estimating lines 23, 24, estimating the system covariance matrix Σ_sys according to equation (4), and estimating σ_{φ,B}² according to equation (7). The small number of measurement samples 21, 22 will nevertheless lead to a lower uncertainty of the current measurement of φ compared to sketch "A)". Sketch "C)" illustrates a scenario with a higher number of measurement samples 21, 22 leading to a lower uncertainty of the current measurement of φ compared to sketch "B)", i.e. σ_{φ,B}² > σ_{φ,C}². The above calculations of the various variances and/or covariances may further be useful to estimate the quality of the fitting process. Large variances and/or covariances may indicate an inaccurate characterization of an object and vice versa. Thus, for example, for automotive implementations the variances and/or covariances can be used to ensure the safety of autonomous driving. For example, an autonomously driving car may decelerate earlier in order to maintain a safety distance with regard to an inaccurate characterization of another car driving in front of the autonomously driving car. Further, as mentioned above, the variances and/or covariances can be used to improve tracking of moving objects by adjusting the coefficient K_k of a Kalman filter. Especially for a characterization of moving objects, such as cars, a tracking filter can thus be adapted to rapidly changing measurement uncertainties. In order to explain possible benefits of the proposed online estimation of the various variances and/or covariances, FIG. 8 illustrates an example traffic situation of a turning object 130, for example a car. The example illustrates a left turn. Due to the rotation of the object 130, the orientation of the edges of the object 130 changes. Depending on the object's orientation, a different number of measurement points characterizing the edges may be "visible". In a first phase denoted by "1" the object 130 moves towards a location/distance sensor 122 detecting the edges of the object 130. In phase "1", the location sensor 122 only sees the front edge of object 130 and barely sees the side edge(s) of object 130. Thus the line fitting will be rather poor and the uncertainty of the fitting process is large. In a next phase denoted by "2" the object 130 has turned a bit more. In phase "2", the location sensor 122 thus "sees" both the front edge and the side edge of object 130.
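Transcribing the orientation measurement and equation (7) as printed gives, for example, the following illustrative sketch; np.arctan2 is used here merely to keep the quadrant information, which is an implementation choice rather than part of the disclosure.

import numpy as np

def orientation_and_variance(p, sigma_sys):
    n1, n2 = p[0], p[1]
    phi = np.arctan2(n1, n2)  # orientation phi = atan(n1 / n2)
    # Equation (7): variance of phi from the variances and covariance of n1, n2
    var_phi = (sigma_sys[0, 0] * n2**2 + sigma_sys[1, 1] * n1**2
               + 2.0 * sigma_sys[0, 1] * n1 * n2) / (n1**2 + n2**2)
    return phi, var_phi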
As a result the lines 23, 24 modeling the edges of the object 130 can be fitted more precisely and the uncertainty of the fitting process is lower. In a next phase denoted by "3" the object 130 has turned yet a bit more and the location sensor 122 only sees the side edge of object 130 and barely sees the front edge of object 130. Hence the uncertainty of the fitting process increases again. Chart "a)" of FIG. 8 shows a tracking process with online uncertainty (covariance) estimation in accordance with the different phases 1, 2, and 3. In chart "a)" the tracked state (e.g. orientation) of the object 130 fits the true state quite well during all phases since measurements with correct covariance estimation are used. Chart "b)" illustrates a tracking process with an underestimated measurement noise in phases 1 and 3 by using a conventional method, such as a look-up table, for determining the uncertainty. Here, the current measurements y_k are weighted too much compared to the previously calculated state \hat{x}_k^-. This can lead to a change in the estimate \hat{x}_k of the current state with low latency, but with high noise. This can result in very uncomfortable control behavior due to the high noise. Chart "c)" illustrates a tracking process with an overestimated measurement noise in phase 2 by using a conventional method, such as a look-up table, for determining the uncertainty. Here, the current measurements y_k are weighted too little compared to the previously calculated state \hat{x}_k^-. This can lead to a change in the estimate \hat{x}_k of the current state with high latency and strong damping. This can lead to slow reactions to critical maneuvers. Chart "a)" of FIG. 8 shows a desired behavior of a tracking filter which can be provided by embodiments of the present disclosure. While the uncertainty of the fitting process and thus the covariance for phases "1" and "3" is large, the tracking filter characterizes the object predominantly using previously calculated states \hat{x}_k^-, which are more precise than the current measurements y_k of the line fitting process. While the object 130 is in phase "2", the tracking filter "trusts" the current measurements y_k more since the uncertainty of the line fitting process and thus the covariance is low. The mentioned process of chart "a)" in FIG. 8 of tracking an orientation of a turning object can be achieved by an adaptive estimation of the coefficient K_k adjusting a Kalman filter. A constant estimation of such a coefficient from look-up tables or heuristics cannot achieve similar results of tracking a turning object, for example. Hence adjusting a discrete time filter by computing the system covariance matrix Σ_sys as a baseline for further covariances Σ_a, Σ_i, or σ_φ² can provide a more reliable and precise tracking compared to conventional methods based on look-up tables or heuristics. FIG. 9 provides an overview of the previously described concept of covariance estimation. In FIG. 9, reference numeral 910 denotes a process of estimating one or more characteristics (e.g., position, orientation, etc.) of the one or more surrounding objects 130 based on measurement samples from the one or more location sensors 122. At 911, clusters of measurement samples can be assigned to different objects 130 using known algorithms, such as density-based clustering (DBSCAN) or K-Means. At 912, measurement samples of a cluster can be assigned to different edges of the corresponding object. This can be done by using known algorithms, such as principal component analysis (PCA) or the random sample consensus (RANSAC) algorithm.
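By way of illustration, the clustering step 911 could be realized with an off-the-shelf DBSCAN implementation; the parameter values below are arbitrary placeholders, not values from the disclosure.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_scan(points, eps=0.5, min_samples=5):
    # Step 911: assign measurement samples to per-object clusters
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    # Label -1 marks noise samples that belong to no object
    return {lbl: points[labels == lbl] for lbl in set(labels) if lbl != -1}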
At 913, the object 130 can be characterized by fitting lines to the measurement samples of the respective edges. The fitting process 913 can be, for example, a (constrained) least squares fit setting up the system matrix A from the measurement samples and solving the linear equation system (1) based on the system matrix A and the vector p including the line parameters [n_1, n_2, c_1, c_2]^T. The eigenvector corresponding to the lowest eigenvalue of the matrix A^T A corresponds to the vector of line parameters. At 914, an orientation φ of the lines 23, 24 and an intersection point (x_i, y_i) (reference point) of the two lines 23, 24 can be determined. Embodiments of the present disclosure propose to additionally determine the respective covariances σ_φ², Σ_i as respective uncertainties of the measurements φ, (x_i, y_i). This additional process is denoted by reference numeral 920 in FIG. 9. At 921, the system covariance matrix Σ_sys is estimated, serving as a baseline for computing σ_φ² at 924 and for computing Σ_i at 922. Optionally, the covariance Σ_a of an arbitrary point (x_a, y_a) can also be computed at 923. At 930, the measured orientation φ of the object together with its related covariance σ_φ² and the measured reference point (x_i, y_i) together with its related covariance Σ_i can be used for tracking the respective object 130. With increasing accuracy or spatial resolution of location/distance sensors, future location/distance sensors (such as laser scanners) are likely to output an enormous number of detections/measurement samples. This can prohibitively increase the computational complexity for estimating object characteristics based on the (constrained) least squares fit according to equation (1), since the dimensions of the system matrix A including the measurement samples increase enormously. Thus the processing time may correlate with the number of detections/measurement samples. According to another aspect of the present disclosure, which can also be combined with the aforementioned aspects, it is proposed to reduce the computational complexity for estimating object characteristics based on the (constrained) least squares fit by registering the measurement samples in a grid map with discrete cell size. It can be shown that, if the cell size is not significantly larger than the sensor measurement uncertainty, an equal performance can be reached compared to directly using all detections. For this purpose embodiments provide a method for characterizing an object based on measurement samples from one or more location sensors. A schematic flowchart of the method 1000 is shown in FIG. 10. The measurement samples have a first spatial resolution, e.g. corresponding to a spatial resolution of the one or more location/distance sensors. Method 1000 includes quantizing 1010 the measurement samples to a grid map of weighted cells having a second spatial resolution, which is lower than the first spatial resolution. A measurement sample contributes to a weight coefficient of one or more weighted cells depending on a measurement accuracy of the respective measurement sample. Method 1000 further includes computing 1020 parameters of one or more lines 23, 24 fitting the weighted cells to obtain a characterization of the object 130. That is, the measurement samples with high resolution are mapped to cells of the grid map with a lower resolution. In this way, the dimensions of the system matrix A can be reduced from a high number of measurement samples to a lower number of cells.
The skilled person having benefit from the present disclosure will appreciate that method 1000 can be implemented by an appropriately configured apparatus 120 of FIG. 1. As one can see in FIG. 11, in order to quantize the measurement samples 21, 22, they are registered in the grid map 1100 of cells with a spatial resolution which is lower than the spatial resolution of the measurement samples 21, 22. Each cell of the grid map 1100 can be characterized by its coordinates (e.g., corresponding to the cell center), while a characterization of the cell is not limited to Cartesian coordinates, but may also work by using different coordinate systems (e.g. polar coordinates). Further, each cell can be characterized by a weight coefficient. Each measurement sample 21, 22 contributes weight to the cell it falls within. Further, it can also contribute weight to surrounding cells, depending on the measurement uncertainty (measurement covariance). The weights can be calculated using a predefined probability density function, optionally with normalization, so that the cell weight is equal to the probability that the detection/measurement is in this cell. For ease of understanding, this is explained in FIG. 12 for an exemplary 1D case, with an example sketch following below. FIG. 12 illustrates a series of measurement samples 21. The weight coefficient of a weighted cell 1210 is determined based on a probability distribution 1220 around the measurement sample. The probability distribution 1220 can be based on the measurement accuracy of the one or more location sensors 122, for example. In the illustrated example the probability distribution 1220 corresponds to a Gaussian probability distribution having a predefined standard deviation depending on the measurement accuracy. As can be seen, a measurement sample having an associated non-vanishing probability distribution in an area of a weighted cell 1210 adds to the weight coefficient of said cell. If j measurement samples contribute to the weight w_i of cell i, the weight coefficient of cell i can be determined according to w_i = Σ_j w_{j,i}. Once all measurement samples 21, 22 have been mapped to weighted cells, the weighted cells can be treated as a lower resolution version of the measurement samples and be used as a baseline for a least squares fit similar to equation (1). That is to say, one or more lines are fitted to the positions of the weighted cells instead of the original measurement samples. This means that act 1020 can comprise detecting, based on the weighted cells, one or more edges of the object 130, the one or more edges corresponding to the one or more lines 23, 24. The cells have varying weights since more or fewer detections are in close proximity. These weights have to be taken into account when fitting the shape. In some example implementations, the line parameters of the lines 23, 24 can be computed based on a weighted constrained least squares algorithm taking into account the weight coefficients of the one or more weighted cells. The corresponding linear equation system can be

W r = W A p.   (8)

The linear equation system (8) is based on a system matrix A including the coordinates of the centers of the weighted cells. In the illustrated example the coordinates are Cartesian coordinates in the form (x, y), while in further possible examples the coordinates may relate to different coordinate systems (e.g. polar coordinates). The vector r denotes deviations (distances) of the weighted cells from the (fitted) lines. Again, the line parameters defining the lines are included in vector p.
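For the 1D case of FIG. 12, the registration of samples into weighted cells can be sketched as follows; the truncation at plus/minus two standard deviations follows the practical simplification mentioned further below, and all names and parameters are illustrative assumptions.

import numpy as np
from math import erf, sqrt

def quantize_to_grid(samples, cell_size, sigma, extent):
    # Cell edges of a 1D grid covering the interval 'extent'
    edges = np.arange(extent[0], extent[1] + cell_size, cell_size)
    weights = np.zeros(len(edges) - 1)
    for s in samples:
        first = max(int((s - 2 * sigma - extent[0]) / cell_size), 0)
        last = min(int((s + 2 * sigma - extent[0]) / cell_size), len(weights) - 1)
        for i in range(first, last + 1):
            # Probability mass of the Gaussian N(s, sigma^2) inside cell i
            z0 = (edges[i] - s) / (sigma * sqrt(2.0))
            z1 = (edges[i + 1] - s) / (sigma * sqrt(2.0))
            weights[i] += 0.5 * (erf(z1) - erf(z0))
    return weights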
W denotes a matrix of the respective cell weights according to:

W = \begin{bmatrix} W_0 & 0 \\ 0 & W_1 \end{bmatrix}, \quad W_j = \begin{bmatrix} w_{j,i=1} & & 0 \\ & \ddots & \\ 0 & & w_{j,i=N_j} \end{bmatrix}

wherein w_{j,i} denotes the weight coefficient of the i-th weighted cell of line j. A first and a second line parameter n_1, n_2 defining an orientation of the one or more lines can be determined based on a computation of an eigenvector of a normalized system matrix, wherein the normalized system matrix comprises normalized coordinates of the weighted cells, wherein the normalized coordinates are normalized to mean coordinates of the one or more lines. Expressed mathematically, the first and second line parameters n_1, n_2 can be computed according to

[n_1, n_2]^T = eigenvector of (A_norm^T W A_norm) corresponding to the lowest eigenvalue,

wherein A_norm can be calculated according to:

A_{norm} = A - I \begin{bmatrix} \tilde{x}_1 & \tilde{y}_1 \\ \tilde{x}_2 & \tilde{y}_2 \end{bmatrix} \quad \text{with} \quad I = \begin{bmatrix} 1 & 0 \\ \vdots & \vdots \\ 1 & 0 \\ 0 & 1 \\ \vdots & \vdots \\ 0 & 1 \end{bmatrix}

and \tilde{x}_j = Σ_i w_{j,i} x_{j,i} and \tilde{y}_j = Σ_i w_{j,i} y_{j,i}, with x_{j,i} and y_{j,i} denoting the coordinates of the center of the i-th weighted cell of line j. Further, a line parameter defining the position of line j can be determined according to:

c_j = \frac{W_j A_j}{\sum_i w_{j,i}} [n_1, n_2]^T

For the example scenario of FIG. 11 (two orthogonal lines), the system covariance matrix Σ_sys can be computed according to:

\Sigma_{sys} = \frac{r^T W r}{2 \sum_{j,i} w_{j,i}} (A^T W A)^{-1}   (12)

The skilled person will appreciate that in the illustrated example with the two orthogonal lines 23, 24 the covariance matrix Σ_sys is a 4×4 matrix having the respective variances of the four line parameters n_1, n_2, c_1, c_2 on its main diagonal and covariances between different line parameters on its minor diagonals. The variance of a random variable can be regarded as a measure of the variability of the random variable. Likewise, the covariance is a measure of the joint variability of two random variables. The system covariance matrix Σ_sys can be regarded as a measure of the uncertainty of the current measurement of lines 23, 24. In this sense it can be used similarly to R in equation (3) above for updating the weight coefficient of a Kalman filter while tracking lines/edges 23, 24, for example. Based on the system covariance matrix Σ_sys, the variance of an orientation, the variance of any point on one of the fitted lines, or a covariance matrix of the intersection point can be computed similar to the above calculations. Such uncertainties can then again be used to adapt a tracking filter for a precise tracking result as mentioned before. Notably, while the spatial resolution of the weighted grid map is lower than the spatial resolution of the one or more location sensors, the uncertainty of the fitting process should be equal to that of a fitting process based on fitting lines directly to the measurement samples. FIG. 13 provides an overview of the previously described concept of using grid maps. At 1301, measurement samples of one scan from one or more sensors are received. At 1302, the measurement samples are mapped to a grid map as described before. At 1303, clusters of cells can be assigned to different objects 130 using known algorithms, such as density-based clustering (DBSCAN) or K-Means. At 1304, cells of a cluster can be assigned to different edges of the corresponding object. This can be done by using known algorithms, such as principal component analysis (PCA) or the random sample consensus (RANSAC) algorithm. At 1306, the object 130 can be characterized by fitting two lines to the weighted cells of the respective edges.
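Equation (12) can be transcribed analogously to equation (4); the following is a sketch under the assumption that W is the diagonal weight matrix defined above and that A and p have been set up from the weighted cells.

import numpy as np

def weighted_system_covariance(A, W, p):
    # Equation (12): Sigma_sys = r^T W r / (2 * sum of weights) * (A^T W A)^-1
    r = A @ p
    total_weight = np.trace(W)  # sum of all cell weights on the diagonal of W
    return (r @ W @ r) / (2.0 * total_weight) * np.linalg.inv(A.T @ W @ A)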
The fitting process 1306 can be, for example, a weighted constrained least squares fit setting up the system matrix A from the weighted cells and solving the linear equation system (8). The eigenvector corresponding to the lowest eigenvalue of (A_norm^T W A_norm) corresponds to the vector of line parameters [n_1, n_2]^T. At 1307, an orientation φ of the lines 23, 24 and an intersection point (x_i, y_i) (reference point) of the two lines 23, 24 can be determined. At 1308, the system covariance matrix Σ_sys can be determined according to equation (12). Embodiments of the present disclosure propose to additionally determine the respective covariances σ_φ², Σ_i as respective uncertainties of the measurements φ and (x_i, y_i). The measured orientation φ of the object together with its related covariance σ_φ² and the measured reference point (x_i, y_i) together with its related covariance Σ_i can be used for tracking the respective object 130. Note that the proposed concept is not limited to L-shapes; an I-shape (line), a U-shape (open rectangle) or a rectangle can also be fitted. This is straightforward since the linear equation system is equivalent; only the system matrix A and the parameter vector p are varied. The sensors are not limited to laser scanners, but could as well be high resolution radar, camera detections, ultrasonic sensors, etc., or a mixture of multiple sensors registered in the same measurement grid. As an approximation, instead of a measurement grid a standard occupancy grid map can be used, which is typically calculated using sensor models. The probability density function does not need to be evaluated for each measurement sample/cell combination, but can be based on a look-up table (patch) increasing close-by cell weights. In theory each measurement sample influences an infinite number of cells, since a normal or Laplace distribution is not bounded. Practically, after some standard deviations the values are very low, so that, for example, only the likelihood of cells in a 1σ-2σ range may be changed for each detection. Since, for example, a laser scanner always has the same measurement uncertainty for a detection at a certain distance, the weight increments can be calculated beforehand and stored in a look-up table. If sensor detections do not always emerge from the edge of a vehicle or outliers are present (close-by detections not from the object), the approach can be extended by a RANSAC algorithm to exclude these cells from fitting. From the linear equations of the fitted lines including the determined line parameters one can compute an intersection point 1107. Defining and/or tracking the position and/or motion of an intersection point can be useful to estimate a position, velocity or rotation of a detected object. Instead of using, for example, a measurement point to determine a position or velocity of an object, using the intersection point of two fitted lines as a reference point is more reasonable. While measurement points are not fixed to a specified point on the object's surface, the intersection point defines approximately the corner of an object. Thus the intersection point is also called reference point. In some examples of a vehicle of the present disclosure it may be useful to determine such a reference point, for example to compute a maneuver to pass another car on a highway. This may also be useful to initiate emergency braking in an emergency situation. Further, the present disclosure comprises determining an orientation of an object.
Said orientation can be calculated directly from the line parameters n_1 and n_2 for the present example of fitting two orthogonal lines according to:

φ = atan(n_1/n_2)

However, for further implementations of the present disclosure an orientation may also be defined for one or more lines, which can also be non-orthogonal. As mentioned above, for tracking purposes an estimation of the uncertainty of the fitting process can be useful, for example to adapt a Kalman filter. This is also possible using a grid map of weighted cells and the corresponding fitted lines. Similar to a fitting process without using grid maps, calculating uncertainties of a fitting process can be based on a system covariance matrix including covariances and variances of the line parameters 1108. In addition to a vector of deviations r and a system matrix A, one may also provide the matrix W including the cumulated cell weights w_{j,i} to calculate a system covariance matrix according to:

\Sigma_{sys} = \frac{r^T W r}{2 \sum_{j,i} w_{j,i}} (A^T W A)^{-1}

wherein r denotes a vector including the deviations of the cell coordinates of the weighted cells from the fitted lines, and A denotes a system matrix of cell coordinates as mentioned before. Based on the calculated system covariance matrix, the variance of an orientation, the variance of any point on one of the fitted lines, or a covariance matrix of the intersection point can be computed similar to the above calculations. Such uncertainties can then again be used to adapt a tracking filter for a precise tracking result as mentioned before. Notably, while the spatial resolution of the weighted grid map is lower than the spatial resolution of the one or more location sensors, the uncertainty of the fitting process should be equal to that of a fitting process based on fitting lines directly to the measurement samples. Due to the similar uncertainty of the fitting process and the reduced effort of fitting lines to quantized measurement samples, the method of the present disclosure provides an efficient characterization of surrounding objects. Thus the quantization of measurement samples of high resolution location sensors can be used for tracking purposes. Further, for tracking objects a tracking filter, such as a Kalman filter, can be used. Then again, a tracking filter can be adapted by using a system covariance matrix calculated simultaneously from the weighted grid map. The aspects and features mentioned and described together with one or more of the previously detailed examples and figures may as well be combined with one or more of the other examples in order to replace a like feature of the other example or in order to additionally introduce the feature to the other example. Examples may further be or relate to a computer program having a program code for performing one or more of the above methods, when the computer program is executed on a computer or processor. Steps, operations or processes of various above-described methods may be performed by programmed computers or processors. Examples may also cover program storage devices such as digital data storage media, which are machine, processor or computer readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform or cause performing some or all of the acts of the above-described methods. The program storage devices may comprise or be, for instance, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
Further examples may also cover computers, processors or control units programmed to perform the acts of the above-described methods, or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs) programmed to perform the acts of the above-described methods. The description and drawings merely illustrate the principles of the disclosure. Furthermore, all examples recited herein are principally intended expressly to be only for illustrative purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art. All statements herein reciting principles, aspects, and examples of the disclosure, as well as specific examples thereof, are intended to encompass equivalents thereof. A functional block denoted as "means for . . . " performing a certain function may refer to a circuit that is configured to perform a certain function. Hence, a "means for s.th." may be implemented as a "means configured to or suited for s.th.", such as a device or a circuit configured to or suited for the respective task. Functions of various elements shown in the figures, including any functional blocks labeled as "means", "means for providing a signal", "means for generating a signal", etc., may be implemented in the form of dedicated hardware, such as "a signal provider", "a signal processing unit", "a processor", "a controller", etc., as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which or all of which may be shared. However, the term "processor" or "controller" is by far not limited to hardware exclusively capable of executing software, but may include digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage. Other hardware, conventional and/or custom, may also be included. A block diagram may, for instance, illustrate a high-level circuit diagram implementing the principles of the disclosure. Similarly, a flow chart, a flow diagram, a state transition diagram, a pseudo code, and the like may represent various processes, operations or steps, which may, for instance, be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. Methods disclosed in the specification or in the claims may be implemented by a device having means for performing each of the respective acts of these methods. It is to be understood that the disclosure of multiple acts, processes, operations, steps or functions disclosed in the specification or claims is not to be construed as being within the specific order, unless explicitly or implicitly stated otherwise, for instance for technical reasons. Therefore, the disclosure of multiple acts or functions will not limit these to a particular order unless such acts or functions are not interchangeable for technical reasons. Furthermore, in some examples a single act, function, process, operation or step may include or may be broken into multiple sub-acts, -functions, -processes, -operations or -steps, respectively.
Such sub-acts may be included in, and be part of, the disclosure of this single act unless explicitly excluded. Furthermore, the following claims are hereby incorporated into the detailed description, where each claim may stand on its own as a separate example. While each claim may stand on its own as a separate example, it is to be noted that, although a dependent claim may refer in the claims to a specific combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of each other dependent or independent claim. Such combinations are explicitly proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended to include also features of a claim in any other independent claim even if this claim is not directly made dependent on the independent claim.
DETAILED DESCRIPTION As illustrated in FIG. 1, some embodiments of the present technology may implement a sensing or detection apparatus 100, such as one configured with particular processing methodologies, useful for detecting particular motions of a user or a patient (the patient may be the same person as, or a different person from, the user of the detection apparatus 100) in the vicinity of the apparatus. The sensor may be a standalone sensor or may be coupled with other apparatus, such as a respiratory treatment apparatus or sleep assessment apparatus. For example, it may optionally provide an automated treatment response based on an analysis of the gestures or motion detected by the sensor of the apparatus. For example, a respiratory treatment apparatus with a controller and a flow generator may be configured with such a sensor and may be configured to adjust a pressure treatment generated at a patient interface (e.g., mask) in response to particular motions or gestures detected by the sensor. The respiratory treatment apparatus may be, for example, a respiratory therapy or PAP apparatus, such as any one described in International Patent Application Publication No. WO 2013/152403, the entire disclosure of which is incorporated herein by reference. In general, such motions or gestures may be understood to be any that are intentionally or subconsciously made by a person rather than those physiological characteristics that are involuntarily periodic in nature (i.e., chest movement due to respiration or cardiac activity). In this regard, movement signals sensed by a sensor that are generated by particular human gestures may be processed to identify or characterize the particular movement or gesture. For example, a hand movement, or particular hand movement, could be detected. Larger movements such as the movement made by a person turning over in bed (a turnover) can also be recognized. Particularized detection of such movement events may then permit them to be counted or serve as a control for an apparatus (e.g., implemented to turn on or off a system, or to provide other control signals). The technology may also be implemented to classify physiological movement such as sway, breathing, and faster motion such as shaving or scratching. It could be implemented to improve the robustness of breathing rate detection when a subject is standing or sitting, such as by identifying and eliminating such sway and gesture motion for respiratory rate detection. The technology may even be implemented to monitor subjects with persistent itch, irritation or discomfort, e.g., in a clinical trial of a dermatological cream for quantification of such itch related or discomfort related motion. In some cases, it could even be implemented to assess the efficacy of consumer products dependent on motion such as a shaving blade or cream/gel, and understand shaving motions, etc. A sensor with suitable processing circuit(s) (e.g., one or more processors) may be configured as a gesture detection apparatus that may be implemented as a component (e.g., a control component) for many different types of apparatus. For example, a television or television receiver may include such a sensor for controlling the operations of the television or television receiver with different gestures (e.g., on/off, volume changes, channel changes etc.). Similarly, the gesture detection apparatus may be configured as part of a user interface for a gaming apparatus or computer, such as to control operations of the game or computer.
Such a gesture detection apparatus may be implemented for many other apparatus that employ a user interface, such that the user interface may be implemented as a gesture-controlled user interface. For example, a processor or controller may evaluate signals from one or more sensors to identify in the processor or controller a particular movement or gesture, and in response, activate generation of a visual (or audio) change to a displayed user interface (such as one displayed on a display device such as an LCD or LED screen). The identified gesture or the activated change may be used to issue one or more control signals to control a device (e.g., a computer, television, computer game console, user appliance, automated machine, robot, etc.) that is coupled to, or communicates with, the processor or controller. A typical sensor, such as a radar sensor, of such an apparatus may employ a transmitter to emit radio frequency waves, such as radio frequency pulses for range gated sensing. A receiver, which may optionally be included in a combined device with the transmitter, may be configured to receive and process reflected versions of the waves. Signal processing may be employed, such as with a processor of the apparatus that activates the sensor, for gesture or motion recognition based on the received reflected signals. For example, as illustrated in FIG. 2, the transmitter transmits a radio-frequency signal towards a subject, e.g., a human. Generally, the source of the RF signal is a local oscillator (LO). The reflected signal is then received, amplified and mixed with a portion of the original signal, and the output of this mixer may then be filtered. In some cases, the received/reflected signal may be demodulated by the transmitted signal, or the phase or time difference between them may be determined, for example, as described in US-2014-0163343-A1, the entire disclosure of which is incorporated herein by reference. The resulting signal may contain information about the movement (e.g., gestures), respiration and cardiac activity of the person, and is referred to as the raw motion sensor signal. In some cases, the signal may be processed to exclude involuntary periodic activity (e.g., respiration and/or cardiac activity) so that movement information in the signal may be classified for its particular gesture or movement type. In some cases, the sensor may be a sensor described in U.S. Patent Application Publication No. 2014/0024917, the entire disclosure of which is incorporated herein by reference. The sensor may include various motion channels for processing of detected signals; for example, such a sensor may be implemented with a gesture processor to provide a gesture channel output signal. This may be distinct from a movement processor that provides a movement channel output signal. Having multiple processors can permit output of signals with different characteristics (e.g., different bandwidths, different sampling rates, etc.) for different motion evaluations. For example, there may be more information in a gesture signal than in a breathing or cardiac signal. For example, the gesture signal can include information representing detection of a wider range of motion speeds. For example, a 1 metre per second movement might cause a 70 Hz baseband signal in a 10.525 GHz receiver. A typical sensing scenario might be able to detect speeds of between 1 mm/s and 5 m/s. For gesture detection, frequencies greater than 10 Hz (1 cm/s up to 5 m/s) may be evaluated.
For breathing, detection may involve evaluation of frequencies corresponding to velocities in the range of 1 mm/s to approximately 1 m/s. Thus, a movement processor may generate a signal focused on slower movements, and a gesture processor may generate a signal with a much wider band that may include both slow movements as well as faster movements. Thus, the sensor may implement analog and/or digital circuit components for signal processing of the received sensor signal. This may optionally be implemented, at least in part, in one or more digital signal processors or other application specific integrated chips. Thus, as illustrated in FIG. 3, the sensor may be implemented with the gesture processor to implement a particular transfer function (Hg), as well as an additional movement processor to implement a particular transfer function (Hm), either of which may be considered a motion processor or channel circuit for producing motion output signals. For example, in some cases, the sensor may have a gesture channel that provides quadrature output signals (I and Q) whose amplitude, frequency and phase are given by:

V_I(x,t) = H_g(jω) A(x) sin(4π x(t)/λ + φ)
V_Q(x,t) = H_g(jω) A(x) sin(4π x(t)/λ + φ + π/2)

where:
H_g(jω) is the transfer function of the sensor gesture channel, such as in a baseband circuit or baseband processor;
A(x) is the demodulated received signal strength, and hence dependent on target radar cross section (size) and target distance (x);
x(t) is the displacement of the target with time;
λ is the wavelength of the RF signal (e.g., a wavelength in free space corresponding to a 10.525 GHz frequency signal, i.e., a wavelength of 28.5 mm); and
jω is the frequency variable of the system response, where ω is the angular frequency and j is the complex number (0+√−1), which provides the phase information.

The gesture channel will have a frequency response to movement. For an in-band movement signal with a linear velocity v which moves a distance dx from position x0 to position x1 towards or away from the sensor in a time interval dt starting at t0 and ending at t1, the gesture channel output signal frequency f is given by

2πf(t1−t0) = 4π(x1−x0)/λ
2πf dt = 4π dx/λ

For a 10.525 GHz, 28.5 mm λ sensor, f ≈ 70.17 v, where f is in Hz and v in m/s. Here, taking λ into account, the units match on both sides, with f (1/s), v (m/s) and 2/λ = 70.175 (m^−1). The constant value of 70 is thus 2/λ and has the dimension of m^−1. In general:

f(t) = 2 v(t)/λ

Typically, the amplitude of the output signal at any particular frequency will depend on the gesture channel transfer function frequency response. The gesture channel will also have a phase response to movement of the target (e.g., a person's hand etc.). The phase difference between the I and Q channels is 90 degrees. As a result the Lissajous curve for the I and Q signals is a circle, as shown in FIG. 4. The frequency (cycle time) is determined by the target speed. The amplitude is determined by the target distance, target cross section and by the gesture channel transfer function. The direction of the phasor, clockwise or anti-clockwise, is dependent on the direction of the motion towards or away from the sensor. The gesture channel, or another channel dedicated to general movement, may also have an amplitude response to non-gesture related movement. The amplitude of its I and Q corresponding channels is determined by the target distance, target cross section and by the movement channel transfer function.
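To make the quadrature model concrete, the following toy simulation generates the I and Q outputs for a target approaching at 1 m/s with an idealized channel (H_g = 1, unit amplitude); the sample rate and the starting distance are arbitrary assumptions for illustration only.

import numpy as np

fs = 1000.0                       # sample rate in Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
lam = 0.0285                      # 28.5 mm wavelength at 10.525 GHz
v = 1.0                           # radial velocity in m/s
x = 0.40 - v * t                  # target approaching from 40 cm
I = np.sin(4.0 * np.pi * x / lam)                # in-phase output V_I
Q = np.sin(4.0 * np.pi * x / lam + np.pi / 2.0)  # quadrature output V_Q
# Expected baseband frequency: f = 2 * v / lam, i.e. approx. 70.2 Hz at 1 m/s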
By way of example, a logarithmic plot of the movement channel signal amplitude versus target distance for a fixed target and in-band target speed is shown in FIG. 5A. FIG. 5B compares the magnitude response of two channels (movement channel and gesture channel) in response to a specific movement over distance, of a different sensor from that of FIG. 5A. The gesture channel has a similar characteristic to the movement channel. FIG. 5B illustrates the channel amplitude response of a version of the sensor, such as with different antialiasing filtering compared to that of FIG. 5A. Because of the radar equation and associated antenna gain transfer function, as well as a non-linear scattering of the reflected signal, the receive signal level declines as a function of the distance (e.g., 1/x^n, 1.5<n<3). Accordingly, by processing of the gesture output signal(s) from the gesture channel and/or the movement signals from the movement channel (which may or may not be the same as the gesture channel), particular gestures or movements may be detected in one or more processors. This may be accomplished by calculating features from the signal(s) and comparing the features and/or changes in the features to one or more thresholds, or identifying patterns in the signal(s). Such features of the signals may be, for example, statistical values of parameters associated with the signal(s), such as average or median values of the signal(s) phase, amplitude and/or frequency, the standard deviation of any of these values, etc. Suitable features may be determined by training of a classifier. Classification of calculated features may then serve as a basis for gesture detection with the trained classifier. For example, one or more processors may evaluate any one or more of the gesture signal(s) phase, amplitude and/or frequency characteristics to detect patterns or other indicia in the signal associated with a particular gesture or movement. In some cases, the characteristics may include amplitude cadence (e.g., amplitude and sidebands) and a time during which the gesture persists. In this regard, analysis of the signal(s) will permit identification of signal characteristics that are produced with respect to certain motions (e.g., towards or away from the sensor), since different motions may produce differently defined amplitude, frequency and/or phase characteristics. Such an analysis may include choosing a pattern for a particular gesture so as to distinguish between several gestures (e.g., select one from a group of different predetermined trained gestures). In some cases, the system may also process feedback from the user based on a perceived correct or incorrect detection of a movement or gesture signal. The system may optionally update its classification based on this input, and may optionally prompt the user to perform one or more repetitions of a specific gesture in order to optimize performance/recognition. In this manner, the system may be configured to adapt (personalise) to the gestures of a particular user, and identify and separate (distinguish) the gestures of different users. In this regard, fast or slow and/or long or short hand gestures towards or away from the sensor can produce clearly detectable signals. Motion across the sensor produces a motion component that is also towards and away from the sensor, but this motion component is small. Therefore motion across the sensor produces distinguishing characteristics but at smaller amplitude, lower frequency and a center line based phase change.
Motion towards the sensor always has a specific phase rotation which is reversed when the motion is away from the sensor. Phase can therefore provide gesture directional information. A frequency spectrogram may clearly show the characteristic motion velocity for particular gestures, which may be identified by processing features of the spectrogram. The amplitude characteristic may require signal conditioning before use, as the amplitude is seen to vary with position (distance from the sensor) as well as target cross section/size. It is possible to extract the radial velocity and direction of a target. Within the sensor range (e.g. 1.8-2 m), it might be a small target near in or a larger target further away. Thus, any one or more of velocity, change in velocity, distance, change in distance, direction, change in direction, etc., extracted from the gesture channel by a processor may also serve as characteristics for detection of particular gestures. In general, the frequency and amplitude of the signals output from the gesture and movement channels are dependent on the baseband circuit amplification and filtering. In one version, the circuit implementing the gesture/movement transfer function may be constructed with a band pass filtered amplifier with a gain (e.g., 9.5) and with a frequency BW (bandwidth) (e.g., approximately 160 Hz) in a desired range (e.g., approximately 0.86 Hz to 161 Hz). Such an example is illustrated in the transfer function simulation graph of FIG. 6A. This may optionally be implemented with both low pass and high pass filters. In some versions, the gesture channel may include an antialiasing filter. The gesture channel frequency characteristics may include greater or lesser antialiasing filtering. As shown in this particular example, there is less than a 10% drop in signal level (6.7 to 6.1 drop in gain) at the band edge of 160 Hz. In some cases, the antialiasing filtering may be implemented by the band pass filter described in the above paragraph. In some cases, a processor may calculate the time difference or phase difference between the emitted and the received signals of the sensor and identify particular motions/gestures based on the calculated time difference and/or phase difference. In the following, example gesture detection is described in reference to certain gestures/motions, such as hand and/or arm movements that may be trained in a system, such as for detection with a classifier executed by a processor. Other gestures may also be trained. For example, in some versions, a group of processing methodologies (e.g., algorithm processing steps) and associated digital signal processing may be implemented for determining physiological repetitive and/or varying motion, including that caused by the movement of the chest due to respiration, sway detection and cancellation, and gross and fine movement detection (gesture detection) due to a multitude of actions such as movement of the hands and arms, shaving (e.g., of the face) or scratching (e.g., due to physical irritation or discomfort). The key input features to such a system are derived from any one or more of the amplitude (temporal), frequency and phase characteristics of the detected signal.
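A digital stand-in for such a band-pass stage can be sketched with standard filter design routines; the filter order and sample rate below are assumptions chosen for illustration, not the disclosed analog design.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0  # assumed sample rate in Hz
# Band-pass with an approx. 0.86-161 Hz pass band, cf. the example above
sos = butter(2, [0.86, 161.0], btype='bandpass', fs=fs, output='sos')

def gesture_channel(raw, gain=9.5):
    # Apply the band-pass filter and the example gain of 9.5
    return gain * sosfiltfilt(sos, raw)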
In essence, the processing applied allows the unravelling of the direction change information from the in-phase (I) and quadrature phase (Q) signals in the presence of significant noise and confounding components (due to the sensor's inherent noise, sensor signal "fold-over" (dependent on frequency), sensor phase imbalance (if present), different types of physiological movement, and other motion sources and background clutter). The processed channel signals (in-phase and quadrature) may be recorded by a radio frequency RADAR and may be digitised using a suitable ADC module. These RF signals can be continuous wave, pulsed (e.g., as applied to a 10.525 GHz sensor, among others) or pulsed continuous wave. The signals may be fed or input into a filter bank, where a series of digital filters including bandpass filtering are applied to detect and remove low frequency sway information. The phase information in the two channels may be compared to produce a clockwise/anti-clockwise pattern. Hysteresis and glitch detection may be applied to suppress signal fold-over, and the resulting signal represents the relative direction of the movement source to the sensor frame of reference. Peak/trough detection and signal following may additionally be implemented to aid this processing. Therefore, the system can determine if a movement is directed towards or away from the sensor, and if it is changing direction. The analog filtering on the sensor can be modified to widen the bandwidth prior to sampling in some versions.
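As a minimal sketch only (the names are hypothetical and the hysteresis threshold is an arbitrary assumption), the clockwise/anti-clockwise determination with glitch suppression might be implemented along these lines:

import numpy as np

def rotation_direction(i_sig, q_sig, hysteresis=0.05):
    # Angle of the I/Q vector; a steadily increasing angle corresponds to one
    # rotation sense (e.g., motion towards the sensor), decreasing to the other
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    dphase = np.diff(phase)
    direction = np.zeros(dphase.shape, dtype=int)
    state = 0
    for n, d in enumerate(dphase):
        # Hysteresis suppresses state flips on small glitches/fold-over
        if d > hysteresis:
            state = 1       # clockwise
        elif d < -hysteresis:
            state = -1      # anti-clockwise
        direction[n] = state
    return direction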
Example Gestures/Movements

Gesture A:

Detectable Gesture A may be considered in reference to FIGS. 7-10. In example A, the gesture is based on hand movement, such as when a person sits approximately 70 cm in front of the sensor. The movement begins with the palm of the hand (face up or forward, facing the sensor) approximately 40 cm from the sensor. This was the furthest point during the gross motion. The closest point may be approximately 15 cm. The hand is extended (moves) towards the sensor (taking 1 second) and, after a brief pause, is pulled back (taking 1 second). The movement may be considered as a replication of a sine wave. The complete gesture takes approximately 2 seconds. Sensor recordings from the gesture channel are shown with respect to repetition of the single gesture (10 times in FIG. 8 and a single time in FIG. 9). FIG. 8 illustrates a plot of changing amplitude versus time, frequency (spectrogram) and changing phase data with respect to time of the sensor recordings from the gesture channel. The phase direction may be plotted with respect to time by applying the I and Q signal outputs to different axes, as illustrated in FIGS. 8, 9 and 10. FIGS. 8-10 show, in reference to gesture A, that motion towards the sensor has a specific phase rotation which is reversed when the motion is away from the sensor. Thus, analysis of this phase can provide gesture directional information. The frequency spectrogram clearly shows the characteristic motion velocity for the gesture. This frequency "chirp" has a distinct personality (i.e., it can be classified in a processor). FIG. 9 depicts a close-up view of the motion/gesture outlined in FIG. 7. FIG. 8 depicts multiple instances of this gesture; the time domain signal amplitude is shown, as well as a spectrogram and a phase plot. The spectrogram indicates time on the x-axis, frequency on the y-axis, and intensity at a particular time for a particular frequency as a different colour. In this example, the subject sat approximately 70 cm in front of the sensor. The movement begins with the palm of the hand (face up) 40 cm from the sensor, the furthest point during the gross motion. The closest point was 15 cm. The hand is extended towards the sensor (1 second) and after a brief pause is pulled back (1 second). The intention was to replicate a sine wave. The complete gesture took 2 seconds. The complete gesture was repeated 10 times (as per FIG. 8). FIG. 9 indicates where the movement is towards the sensor, close to the sensor, then moving away from the sensor. For this case, the maximum frequency is seen to range from 90-100 Hz. The phase is seen to move clockwise during motion towards the sensor, and anti-clockwise when moving away. In FIG. 10, the I and Q (in-phase and quadrature) channels were plotted against time on a 3D figure using MATLAB (The Mathworks, Natick) as the second method of analysis for phase direction. The amplitude characteristic may employ signal conditioning before use, as the amplitude is seen to vary with position (distance from the sensor) as well as target cross section/size. The radial velocity and direction of a target may also be extracted. Within the sensor range (e.g., 2 m), it (the target) might be a small target near in or a larger target further away.

Gesture B:

Another detectable gesture B (arm and hand) may be considered in reference to FIGS. 11-12. Movement begins with the arm fully extended. As shown in FIG. 11, a hand is then swung completely across the body. The palm naturally changes from face up to face down as the arm is moved from close to the sensor (5 cm) to furthest away from the sensor (135 cm). At the midway point of the gesture (at the peak of the arm swing arc over the head) the palm direction will change. The complete gesture takes less than 4 seconds and may, for example, be performed in a sitting position. Gesture B, with an approximately 2 m/s velocity, produces a frequency of 140 Hz. This occurs within a 1 m distance over a 1 second period with a start and end velocity of 0 m/s. The sensor may be positioned near the person for detection (e.g., 95 cm from the centre of the chest). For example, a furthest point during the gross motion of the gesture may be about 135 cm from the sensor and the closest point may be about 5 cm. Such closest and furthest points may be considered in reference to a measurement from the finger tips. FIG. 12 illustrates the amplitude, frequency and phase characteristics that may be processed for detection of the gesture.
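The 140 Hz figure quoted for gesture B is consistent with the standard continuous-wave Doppler relation. As an illustrative check (assuming the 10.525 GHz carrier mentioned above, which is an assumption for this example rather than a stated pairing):

$$f_d = \frac{2 v f_c}{c} = \frac{2 \times 2\ \mathrm{m/s} \times 10.525 \times 10^{9}\ \mathrm{Hz}}{3 \times 10^{8}\ \mathrm{m/s}} \approx 140\ \mathrm{Hz}$$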
Shaving Motions

The system may be applied for many types of activity, preferably associated with repeating motions. Examples can include detecting and classifying activities such as rinsing, combing, brushing (e.g., hair or teeth) or shaving strokes, etc. In some cases, the system may assume that the primary motions recorded contain a particular activity (e.g., shaving information and/or rinsing). Analysis of the gesture channel can permit, for example, estimating the total number of strokes and detecting changes in the direction of the motion. Similarly, the relative direction of a stroke (e.g., up/down or down/up, etc.) may be determined. The relative direction of the motion source may be detected. The rate of strokes may be determined. By detecting a likely stroke event, it is possible to calculate and provide an estimate of the rate in strokes per minute. Peak high rate events are marked as possible rinse events.

In some versions of the system, an activity type or gesture processor may implement any one or more of the following processing steps:
- Calculate the spectral content of the gesture signal(s): apply a Fast Fourier transform and find the peak (frequency domain) in a rolling window
- Calculate the distance between each sinusoidal-like peak
- Calculate zero crossings of the signal (time domain)
- Estimate the relative direction of movement and its duration
- Extract the phase shift between the two channels

Alternative time-frequency analysis, such as the short time Fourier transform or wavelets, may also be implemented. In general, the complex sensor signal is based on arm movement, head movement, torso movement, etc. Other movements may also be detected. In some versions, the clockwise/anti-clockwise direction change information may be clocked to produce an impulse to represent a change in direction. These pulses may be applied to a counter, and grouped into different rates of occurrence. FIG. 13 illustrates the change in direction detection as the I/Q phase signal difference varies. Therefore, typical rates consistent with the act of shaving can be noted, and thus the period of shaving deduced. An increase in rate associated with excess high frequency information can be inferred as the arm moving to the face, or the rinsing of a razor blade. An advantage of using an RF sensor for detecting shaving or other motions and gestures is the enhanced privacy versus, say, a video based system that captures or processes pictures/video of a user or group of users. A reduction in rate and direction change can be used to detect breathing. In addition, time domain and frequency domain processing is applied to the signals to localize specific bands. Breathing can be further separated from confounding human body sway by detecting a relative change in rate with an unexpected direction change behaviour characteristic. In FIG. 13, the IQ plot in the upper left panel represents a trace moving in a clockwise direction, with the upper right panel showing a trace moving in an anti-clockwise direction. A change from a clockwise to anti-clockwise direction (or vice versa) gives rise to the direction change trace shown in the lower left and right panels by the top line therein. The middle and bottom lines represent the I and Q channels respectively in this example.

In one example version, strokes of the activity (e.g., shaving strokes) may be counted with application of processing that includes the following (a minimal sketch follows this list):
- Band pass filtering
- Calculating the change of state from clockwise/anticlockwise
- Applying hysteresis (to avoid flipping state on small blips in the signal, e.g., fold-over)
- Suppressing feature updates around the zero point
- Differentiating the resulting signal
- Counting the number of transitions (e.g., to identify a return stroke)

A signal graph illustrating such detection processing is shown in FIG. 14.
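The counting stage of that chain might look as follows (a sketch only; it assumes a +1/-1 direction state sequence such as that produced by the rotation_direction() sketch above, and the names are hypothetical):

import numpy as np

def count_strokes(direction):
    # direction: the +1/-1 state sequence from the rotation detector above;
    # each state transition is counted (e.g., to identify a return stroke)
    d = np.asarray(direction)
    return int(np.count_nonzero(np.diff(d)))

def strokes_per_minute(direction, fs):
    # Rate estimate over the analysed window; peak high-rate periods may be
    # flagged as possible rinse events
    seconds = len(direction) / float(fs)
    return 60.0 * count_strokes(direction) / seconds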
In executing such gesture/activity detection training, classification may be performed in the following manner, as illustrated in FIG. 15. One set of recordings from the sensor may be accessed in a read step 1502 and used as a training set. Suitable detection features (with phase, frequency and amplitude) may be produced, such as in a feature generation step 1504. In a training setup step 1506, a classifier configuration may be created for particular motions/gestures. The features may then be processed in a training classify step 1508 to relate a motion to the most relevant of the calculated features. The training classification may be repeated at check step 1512 if further tuning, such as improved classification training, is desired. In a pre-testing setup step 1505, a classifier configuration may be accessed for evaluating features of previously classified motions/gestures. These pre-classified motions may then be compared with newly generated features in a classification step 1507 to identify one of the pre-classified motions based on the features. Optionally, the performance of the classifier from training or testing may be assessed in video performance steps 1510, 1509 using the identified features to compare with video based annotations (i.e., where a simultaneous video is recorded during performance of known gestures to act as a timestamp reference for later annotation of the motion signals; this requires human scoring of the signals and/or a separate log of motion/gesture events to be performed) and, based on the result of the comparison, the features may be fine-tuned. An independent test set may then be used to test the resulting configuration of features. For this type of supervised learning (unsupervised learning is also possible using other techniques), an independent test set is held back from the training set in order to check the likely real world performance of a system (i.e., the performance on unknown data). During the development process, iteration is carried out on the training set in order to maximise performance, with the aim of using the minimum number of features that maximise performance where possible. Principal Component Analysis (PCA) or other dimensionality reduction may be implemented in order to select such features. It will be recognized that steps 1502, 1504, 1505 and 1507 may be implemented by a processor or controller, in or associated with a detection device 100, for the purposes of making motion identifications as previously described, when not implementing training and testing. For example, a Kolmogorov-Smirnov (KS) goodness-of-fit hypothesis statistical test may be implemented to compare the cumulative distribution function of the target block of data to the training data. Such a block by block classification is illustrated in the example of FIG. 16. It may be implemented with any one or more of the following processes:

(1) Biomotion Block Division

The I and Q signal data can be split up into either continuous non-overlapping or partially overlapping blocks. For example, a block length of 1*160 samples (1 second at 160 Hz) with a 50% overlap could be used, or some other combination. Computational complexity can be traded for precision by varying the block length and/or by varying the amount of overlap.

(2) Block Pre-Processing

The block of data may be checked to see if the data falls within a presence or absence section (i.e., whether there is a user with a breathing rate and/or heartbeat within range of the sensor or sensors; for breathing rate detection, 15 to 60 or more seconds of data may be required to detect multiple breathing cycles). Furthermore, the block may be checked to see that no possible RF interference signals are detected (i.e., to separate motion/gesture signals from strong sources of RF interference that might be detected by an RF transceiver; also, other non-biomotion sources such as fans may be detected and rejected at this stage). If the block under consideration does not meet these criteria, it may optionally not be classified further. The block or blocks may also be cross referenced and/or correlated with other information sources of the user or the room environment, in order to check the likelihood of a user actually being in the vicinity of the sensor; for example, data from a wearable device, location or motion data from a cell phone, room environmental sensors, home automation or other security sensors.
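A minimal sketch of the block division of step (1) follows (hypothetical names; the presence and interference checks of step (2) would then be applied to each block that this yields):

def iq_blocks(i_sig, q_sig, fs=160, block_seconds=1.0, overlap=0.5):
    # Split the I and Q data into partially overlapping blocks, e.g.,
    # 1*160 samples (1 second at 160 Hz) with a 50% overlap
    size = int(fs * block_seconds)
    step = max(1, int(size * (1.0 - overlap)))
    for start in range(0, len(i_sig) - size + 1, step):
        yield i_sig[start:start + size], q_sig[start:start + size]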
(3) Feature Extraction

For the block under consideration, a number (either all or a subset) of time-domain (temporal) and frequency domain or time/frequency features may be calculated as follows. It is noted that different block lengths may be considered simultaneously.
- Transformed trimmed mean and median (said transformation being, for example, but not limited to, the square root, square or log) of the I & Q signals (or of derived features)
- Transformed spread in the signals (said transformation being, for example, but not limited to, the square root, square or log) calculated using interpolation or otherwise, covering a defined range (for example, but not limited to, the range from 5% to 95%, or the interquartile range)
- The envelope of the signal (I & Q) using a Hilbert transform
- The relative amplitude of the signal (I & Q) compared to surrounding examples of the signal
- The zero crossings of the signal (I & Q)
- The peak frequency in a moving window
- The ratios of the peak frequency to the second and third harmonics
- The phase direction (clockwise or anticlockwise)
- The phase velocity
- The existence (or lack thereof) of a breathing and/or cardiac signal in the signal (i.e., relating the motion to a biomotion, e.g., that motion made by a person)
- The presence of a similarity or difference in the motion signal in the I & Q channels

(4) Block Classification

As an example, for an input feature set with a characteristic distribution, the Kolmogorov-Smirnov (KS) two-sample non-parametric goodness-of-fit test may be used to compare a reference sample (e.g., features of a shaving motion or a particular hand gesture, derived from time, frequency, phase, etc.) to a new sample distribution that has been captured by the sensor(s) (e.g., quantifying a distance between the empirical distribution function of the new sample detected and the cumulative distribution function of the reference distribution). A multivariate version of the KS test may also be implemented, although this may require multiple cumulative density function comparisons to be made. As another example, a linear discriminant classifier (LDC), based on Fisher's linear discriminant rule, is applied to each non-overlapped or overlapped block. For each block of data fed in, there are multiple predetermined output classes (e.g., different motion or gesture states). The classifier outputs a set of numbers representing the probability estimate of each class, in response to a set of input features. Linear discriminants partition the feature space into different classes using a set of hyper-planes. Optimisation of the model is achieved through direct calculation and is extremely fast relative to other models such as neural networks.

The training of an LDC proceeds as follows. Let $x$ be a $d \times 1$ column vector containing feature values calculated from a data set. We wish to assign $x$ to one of $c$ possible classes ($c = 2$ in our case). A total of $N$ feature vectors are available for training the classifier, with the number of feature vectors representing class $k$ equal to $N_k$, i.e.:

$$N = \sum_{k} N_k \quad (1)$$

The $n$-th training vector in class $k$ is denoted as $x_{k,n}$. The class-conditional mean vectors $\mu_k$ are defined as:

$$\mu_k = \frac{1}{N_k} \sum_{n=1}^{N_k} x_{k,n} \quad (2)$$

We now define a common covariance matrix over all classes (i.e., we assume that each class only differs in its mean value, and not in its higher order statistics). The common covariance matrix is defined as:

$$\Sigma = \frac{1}{N - c} \sum_{k=1}^{c} \sum_{n=1}^{N_k} (x_{k,n} - \mu_k)(x_{k,n} - \mu_k)^T \quad (3)$$

The $\mu_k$'s and $\Sigma$ are calculated using the training data. Once these values have been calculated, a discriminant value $y_k$ for an arbitrary data vector $x$ can be calculated using:

$$y_k = -\frac{1}{2} \mu_k^T \Sigma^{-1} \mu_k + \mu_k^T \Sigma^{-1} x + \log(\pi_k) \quad (4)$$

where $\pi_k$ is the a priori probability of the vector $x$ being from class $k$. It is easy to convert the discriminant values to posterior probabilities using:

$$p(k \mid x) = \frac{\exp(y_k)}{\sum_{k'=1}^{c} \exp(y_{k'})} \quad (5)$$

This formulation provides a mapping from discriminant values to posterior probabilities. The final class assigned to $x$ is the class with the highest posterior probability. This becomes the block output. However, the system can also employ methods such as neural networks, deep learning analysis, etc., especially where reasonable computing power is available. More complex methods, including morphological signal processing (e.g., such as may be used in image processing), can augment feature analysis when using more complex classification methods; these may be more appropriate for detecting patterns seen in complex motions/gestures.

The periodic nature of the activity is further illustrated in the signal graph of FIG. 17, showing the I channel, the Q channel, the stroke and the stroke rate for the activity. In this example, assuming a shaving activity, the fourth (lowest) axis depicts a probable razor rinse period with black dots (labelled "DD" in FIG. 17 in the lowest panel, labelled "stroke rate"; the high rate areas indicate these rinse points). This clearly illustrates detection of the periodic nature of the shaving activity.

Further Example Gestures/Movements

As further illustrated in FIGS. 18-25, additional motion gestures may be detected by analysis of the phase, frequency and/or amplitude of the sensor gesture channel signals. Although certain distances from the sensor are provided, it will be recognized that these distances may be altered depending on the configured detection range of the sensor.

Gesture 1:

Gesture 1 may be considered in reference to FIGS. 18A-C. In this example, the sensor may be positioned a distance (e.g., 70 cm) from the centre of the chest. The sensor is spaced from the gesturing subject in the direction of the viewer of FIG. 18A (this is also the case with the subsequent FIGS. 19A, 20A, 21A, 22A, 23A and 24A). The furthest point during the gross motion may be approximately 55 cm from the sensor and the closest point may be approximately 45 cm. The furthest point may be measured in reference to the finger tips. As shown in FIG. 18A, the hand movement is performed with the arm parallel to the sensor. Only the hand moves, back and forth, perpendicular to the sensor. The complete gesture 1 takes approximately 2 seconds. The motion may be performed from a sitting or standing position. As illustrated in FIGS. 18B (10 repetitions of the gesture) and 18C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 2:

Gesture 2 may be considered in reference to FIGS. 19A-C. In this example, the sensor was positioned approximately 70 cm from the centre of the chest. The gesture may be considered waving a hand in front of the sensor.
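Equations (2)-(5) of the LDC description below translate directly into a few lines of code. The following sketch (NumPy; hypothetical names; class priors estimated from class frequencies, which is an assumption) is one way the training and posterior computation might be realised:

import numpy as np

def train_ldc(X, y):
    # X: (N, d) feature matrix; y: (N,) integer class labels 0..c-1
    classes = np.unique(y)
    N, d = X.shape
    mu = np.array([X[y == k].mean(axis=0) for k in classes])       # eq. (2)
    sigma = np.zeros((d, d))
    for k, m in zip(classes, mu):
        Xc = X[y == k] - m
        sigma += Xc.T @ Xc
    sigma /= (N - len(classes))                                    # eq. (3)
    priors = np.array([(y == k).mean() for k in classes])          # pi_k
    return mu, sigma, priors

def ldc_posteriors(x, mu, sigma, priors):
    inv = np.linalg.inv(sigma)
    yk = np.array([-0.5 * m @ inv @ m + m @ inv @ x + np.log(p)
                   for m, p in zip(mu, priors)])                   # eq. (4)
    e = np.exp(yk - yk.max())       # max subtracted for numerical stability
    return e / e.sum()                                             # eq. (5)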
The furthest point during the gross motion was approximately 50 cm from the sensor and the closest point was approximately 45 cm. The furthest point was measured to the finger tips at an angle of approximately 24 degrees from the sensor. As illustrated in FIG. 19A, movement begins with the arm parallel to the sensor. Only the hand moves, back and forth, parallel to the sensor. The complete gesture takes less than approximately 2 seconds. The motion may be performed while standing, lying down or from a sitting position. As illustrated in FIGS. 19B (10 repetitions of the gesture) and 19C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 3:

Gesture 3 may be considered in reference to FIGS. 20A-C. In this example, the sensor was positioned approximately 70 cm from the centre of the chest. The furthest point during the gross motion was approximately 85 cm from the sensor and the closest point was 45 cm. The furthest point is measured in reference to the finger tips. The closest point is the shortest distance from the sensor to the arm, rather than the finger tips. As illustrated in FIG. 20A, the arm and hand movement begins with the arm parallel to the sensor. The arm is then crossed over the body before returning to the original position. The complete gesture takes approximately 2 seconds. The motion may be performed while standing, lying down or from a sitting position. As illustrated in FIGS. 20B (10 repetitions of the gesture) and 20C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 4:

Gesture 4 may be considered in reference to FIGS. 21A-C. In this example, the sensor was positioned approximately 70 cm from the centre of the chest. The furthest point during the gross motion was approximately 60 cm from the sensor and the closest point was approximately 45 cm. The furthest point is measured in reference to the finger tips. The closest point is the shortest distance from the sensor to the arm, rather than the finger tips. As shown in FIG. 21A, the arm and hand movement begins with the arm raised, with the finger tips pointing in an upward direction, parallel to the sensor. The arm moves parallel to the sensor. The complete gesture takes less than approximately 2 seconds. The motion may be performed while standing, lying down or from a sitting position. As illustrated in FIGS. 21B (10 repetitions of the gesture) and 21C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 5:

Gesture 5 may be considered in reference to FIGS. 22A-C. In this example, the sensor was positioned approximately 95 cm from the centre of the chest. The furthest point during the gross motion was approximately 135 cm from the sensor and the closest point was approximately 5 cm. The closest and furthest points are measured in reference to the finger tips. As shown in FIG. 22A, the movement begins with the arm fully extended. The hand is then swung completely across the body. The complete gesture takes less than approximately 4 seconds. The motion may be performed while standing, lying down or from a sitting position.
As illustrated in FIGS. 22B (10 repetitions of the gesture) and 22C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 6:

Gesture 6 may be considered in reference to FIGS. 23A-C. In this example, the sensor was positioned approximately 70 cm from the centre of the chest. The furthest point during the gross motion was approximately 95 cm from the sensor and the closest point was approximately 50 cm. The furthest point is measured in reference to the finger tips. The closest point is the shortest distance from the sensor to the shoulder, rather than the finger tips. As shown in FIG. 23A, the arm and hand movement begins with the arm fully extended above the head. The hand is then swung down through a 90 degree angle. The complete gesture took approximately 3 seconds. The motion may be performed while standing, lying down or from a sitting position. As illustrated in FIGS. 23B (10 repetitions of the gesture) and 23C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Gesture 7:

Gesture 7 may be considered in reference to FIGS. 24A-C. In this example, the sensor was positioned approximately 70 cm from the centre of the chest. The furthest point during the gross motion was approximately 52 cm from the sensor and the closest point was approximately 50 cm. As shown in FIG. 24A, the arm and hand movement begins with the arm parallel to the sensor and the palm of the hand facing upwards. The hand is then pulsed up approximately 15 cm before returning to the original position. The complete gesture took approximately 2 seconds. The motion may be performed while standing, lying down or from a sitting position. As illustrated in FIGS. 24B (10 repetitions of the gesture) and 24C (single gesture), features of any one or more of the phase, frequency and amplitude may be classified for detection of the gesture or the repeated gesture.

Rollover Movement 1

Rollover detection may be considered in reference to FIGS. 25A-B. For sleep information detection, a rollover may be taken as an indication that the person is having difficulty sleeping. In this example, the movement begins with a person on their back, for example. The person rolls onto their side towards the sensor, which may take approximately 2 seconds. There may be a pause thereafter (such as about 1 second in the test example). The person then rolls away from the sensor to the initial position, which may take approximately 2 seconds. In the signal data of the figures, the complete movement takes 5 seconds (two rollovers). This is repeated 10 times in the data. As illustrated in FIGS. 25A (10 repetitions of the rollover motion) and 25B (rollover), features of any one or more of the phase, frequency and amplitude may be classified for detection of the motion or the repeated motion.

Rollover Movement 2

Rollover detection may be further considered in reference to FIGS. 26A-B. In this example, the movement begins with the subject on their back, for example. The person will then roll onto their side away from the sensor, which may take approximately 2 seconds. There may be a pause thereafter (such as about 1 second in the test example). The person may then roll back towards the sensor to the initial position. This may take approximately 2 seconds. In the signal data of the figures, the complete movement takes 5 seconds (two rollovers). This is repeated 10 times in the data.
As illustrated in FIGS. 26A (10 repetitions of the rollover motion) and 26B (rollover), features of any one or more of the phase, frequency and amplitude may be classified for detection of the motion or the repeated motion.

Rollover Movement 3

Rollover detection may be further considered in reference to FIGS. 27A-B. In this example, the movement is a little longer than that of the rollover of FIG. 26 (rollover movement 2). The movement begins with the subject on their back, for example. The person will then roll onto their front away from the sensor, which may take approximately 3 seconds. There may be a pause thereafter (such as about 1 second in the test example). The person may then roll back towards the sensor to the initial position. This may take approximately 3 seconds. In the signal data of the figures, the complete movement takes 7 seconds (two rollovers). This is repeated 10 times in the data. As illustrated in FIGS. 27A (10 repetitions of the rollover motion) and 27B (rollover), features of any one or more of the phase, frequency and amplitude may be classified for detection of the motion or the repeated motion.

Rollover Movement 4

Rollover detection may be further considered in reference to FIGS. 28A-B. In this example, the movement is a little longer than that of the rollover of FIG. 25 (rollover movement 1). The movement begins with the subject on their back, for example. The person will then roll onto their front toward the sensor, which may take approximately 3 seconds. There may be a pause thereafter (such as about 1 second in the test example). The person may then roll back away from the sensor to the initial position. This may take approximately 3 seconds. In the signal data of the figures, the complete movement takes 7 seconds (two rollovers). This is repeated 10 times in the data. As illustrated in FIGS. 28A (10 repetitions of the rollover motion) and 28B (rollover), features of any one or more of the phase, frequency and amplitude may be classified for detection of the motion or the repeated motion.

In one alternative approach, a global feature may be extracted directly from the spectrogram in order to provide a reference signature. As depicted in the gesture and rollover figures, a characteristic pattern for each gesture can be seen in the colour spectrogram. Such an approach may be performed by processing or analysing the colour information in the spectrogram pixels, e.g., in a block by block or region approach. Optionally, enhancement may be performed, including edge detection and enclosing specific patterns; this can be effective to remove or reduce noise in the surrounding pixels. The colour may be processed in, for example, RGB (red green blue) or CMYK (cyan magenta yellow black) depending on the colour space; each may be treated as a separate channel. Colour intensities can be separated by intensity value (e.g., low, low-medium, medium, medium-high, high or some other combination), and then passed into a classifier, such as a neural network. For example, consider FIG. 22C and its colour spectrogram, together with the processing images of FIG. 30. Edge enhancement here may be directed at capturing the outline of the red blobs, and rejecting the stippled blue/purple region. The shape of the red region with yellow streaks thus provides an initial signature (template) for this gesture type, and can be used in a supervised learning classifier. The variability of multiple iterations of this gesture (movement) is shown in FIG. 22B, although the same basic shape and colour persists. This pattern can be averaged from this repeated movement, and can provide a training input for this target gesture (movement).
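A sketch of how such a spectrogram signature might be isolated computationally follows (SciPy; the thresholds, window sizes and names are illustrative assumptions, and it works on the raw power values rather than on the rendered colour channels described above):

import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import label

def spectrogram_blobs(i_sig, q_sig, fs=160, quantile=0.95):
    # Spectrogram of the complex I + jQ signal; the strongest cells play the
    # role of the dominant colour regions isolated in FIG. 30
    f, t, sxx = spectrogram(np.asarray(i_sig) + 1j * np.asarray(q_sig),
                            fs=fs, nperseg=64, noverlap=48)
    power = 10.0 * np.log10(sxx + 1e-12)
    mask = power > np.quantile(power, quantile)
    blobs, count = label(mask)      # connected high-intensity regions ("blobs")
    return f, t, blobs, count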
FIG. 30 shows images, from top to bottom panel, as follows: the top panel contains the RGB colour space channels; the second panel depicts the R (red) channel only; the third depicts G (green); the fourth B (blue); and the bottom panel, B-D, is the blue channel with blob detection applied to the intensity values, shifted slightly to the left to remove the frequency component at the very left (shown in panels 1-4). The maximum frequency ranges from 170 Hz (rightmost "blob") to 210 Hz (leftmost "blob"). Thus, as illustrated in reference to FIG. 30, the image data may be processed to split the colour data of the top image (see FIG. 22C for the original) into any one or more of the discrete red, green and blue colour channels (the R (red), G (green) and B (blue) panels respectively), and main blob areas may be selected. To the human eye, the clearest signature is evident in the blue channel (bottom); i.e., consider the black region (ignoring the vertical stripe to the left of the image). The bottom image, B-D, illustrates overlaid blob detection of the isolated blue channel. Such splitting/colour separation and blob detection may be performed by suitable algorithm(s) of one or more processors of the system, such as part of a process involving feature detection and/or classification as described in more detail herein.

Multiple RF Sensors (e.g., Stereo Sensor System):

For an exemplary single RF sensor, the I and Q phase can be detected as the user moves towards or away from it. Movement perpendicular to a single sensor (across the face of the sensor) may have a much smaller relative phase change (e.g., a movement in an arc across the sensor's detection plane will have a very low or no measurable phase change). Additional sensors (e.g., a system with a second (and subsequent) sensor(s) placed adjacent to the first sensor, such as at an angle from it) can be employed to also detect signals of objects moving in and out. For example, a second sensor may be positioned in the arc of the first sensor (e.g., the second sensor might be at 45 degrees to the first sensor, or orthogonal (at 90 degrees), or at another appropriately differentiated angle with respect to the first sensor). Thus, the effective stereo sensor system may more efficiently detect and characterise movement across the various detection planes corresponding to the sensors (e.g., a movement perpendicular to the first sensor may be more clearly characterized by analysis of the signal of the second sensor). In such a case, the movement/gesture classification may take into account the signal information from both sensors (e.g., features derived from the phase output of both sensors). Such a system may, for example, return a different control signal based on the direction of motion in this manner. For a shaving analysis, the quadrant of the face (or other part of the body) could be determined. For a gaming implementation, a specific localised movement could be determined. Thus, two sensors can work cooperatively as a "stereo" system to detect and recognize gestures in two dimensions (2D), and three sensors can be used for identifying three dimensional characteristics (3D) of a gesture movement, using, for example, range gated RF sensors. Thus, a single gesture may be characterized by obtaining detection signals from multiple sensors. For the two sensor 2-dimension case (i.e., 2D), the I/Q signals from each of a sensor 1 and a sensor 2 (differential in I1, I2, Q1, Q2 for the two sensor case (left and right)) can be analyzed by a processor. The resulting difference in amplitude and phase provides an "x", "y" output. In some cases, three sensors may be implemented in a cooperative system to add the "z" axis, in order to provide fine grained three-dimensional gesture recognition data in the resulting sensor field. In such a case, the differentials of I1, I2, I3, Q1, Q2, Q3 may be evaluated by a processor with the signals from the three sensor case to discriminate a single gesture. In some embodiments, a maximum phase may be obtained by placing at least two of the three sensors orthogonal to each other.
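As a speculative sketch only (the 10.525 GHz-derived wavelength and the orthogonal-mounting simplification are assumptions, and the names are hypothetical), per-sensor radial velocities might be combined into the "x", "y" output as follows:

import numpy as np

def radial_velocity(i_sig, q_sig, fs, wavelength=0.0285):
    # Radial velocity from the unwrapped I/Q phase of one sensor; for a CW
    # Doppler sensor, phase = 4*pi*distance/wavelength, so
    # v = (dphase/dt) * wavelength / (4*pi)
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    return np.gradient(phase) * fs * wavelength / (4.0 * np.pi)

def xy_output(i1, q1, i2, q2, fs):
    # With the two sensors mounted orthogonally, each sensor's radial
    # component approximates one axis of the motion in the shared field
    return radial_velocity(i1, q1, fs), radial_velocity(i2, q2, fs)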
In some versions, a multi-ray antenna (phased array antenna) may be implemented on one sensor if the antennas are separated. This may eliminate the need for a second sensor. In some cases, the RF sensor motion signals may, for example, be up-converted to audio frequencies such that a gesture may be detected (recognised) by a specific audio signature (e.g., by producing a sound from a speaker with the up-converted signals). This could allow a human to distinguish different types of gesture, or known audio recognition approaches could be utilised to augment the classification and/or device training. In this specification, the word "comprising" is to be understood in its "open" sense, that is, in the sense of "including", and thus not limited to its "closed" sense, that is the sense of "consisting only of". A corresponding meaning is to be attributed to the corresponding words "comprise", "comprised" and "comprises" where they appear. While particular embodiments of this technology have been described, it will be evident to those skilled in the art that the present technology may be embodied in other specific forms without departing from the essential characteristics thereof. The present embodiments and examples are therefore to be considered in all respects as illustrative and not restrictive. For example, whilst the disclosure has described the detection of movements such as hand/arm based gestures and roll-overs, the same principle is applicable to other large scale motions, such as a user moving between a lying and a sitting position in bed (and vice versa), reaching for a specific target (a table lamp, or a respiratory apparatus), etc. It will further be understood that any reference herein to subject matter known in the field does not, unless the contrary indication appears, constitute an admission that such subject matter is commonly known by those skilled in the art to which the present technology relates.

PARTS LIST

Detection apparatus 100
Read step 1502
Feature generation step 1504
Pre-testing setup step 1505
Training setup step 1506
Classification step 1507
Training classify step 1508
Video performance step 1509
Video performance step 1510
Check step 1512